Creative Information Technology, Inc.

Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer with a contract length of "unknown," offering a pay rate of "$XX/hour." It requires 3+ years of AWS experience, proficiency in Python and SQL, and expertise in ETL processes, healthcare data integration, and data architecture.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
February 6, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Baltimore, MD
-
🧠 - Skills detailed
#Automation #Cloud #Big Data #Documentation #Terraform #Apache Spark #JSON (JavaScript Object Notation) #Data Lake #Data Modeling #S3 (Amazon Simple Storage Service) #Statistics #EDW (Enterprise Data Warehouse) #Computer Science #Data Warehouse #PySpark #Spark (Apache Spark) #Data Integration #Java #Scala #Infrastructure as Code (IaC) #Distributed Computing #Airflow #ETL (Extract, Transform, Load) #Athena #Storage #Metadata #Data Architecture #Data Pipeline #Data Quality #Dimensional Data Models #Indexing #Programming #Data Processing #Data Lakehouse #XML (eXtensible Markup Language) #AWS Glue #Compliance #AWS (Amazon Web Services) #IAM (Identity and Access Management) #Mathematics #Redshift #Snowflake #CMS (Centers for Medicare & Medicaid Services) #Data Mart #Delta Lake #Debugging #FHIR (Fast Healthcare Interoperability Resources) #Security #Data Engineering #Python #SQL (Structured Query Language) #Data Science
Role description
About Us
Creative Information Technology Inc. (CITI) is an esteemed IT enterprise renowned for its exceptional customer service and innovation. We serve both government and commercial sectors, offering a range of solutions such as Healthcare IT, Human Services, Identity Credentialing, Cloud Computing, and Big Data Analytics. With clients in the US and abroad, we hold key contract vehicles, including GSA IT Schedule 70, NIH CIO-SP3, GSA Alliant, and DHS Eagle II. Join us in driving growth and seizing new business opportunities!

Position Description
Background: The Maryland Department of Health (MDH) is seeking a hands-on Data Engineer to design, develop, and optimize large-scale data pipelines in support of its Enterprise Data Warehouse (EDW) and Data Lake solutions. This role requires deep technical expertise in coding, pipeline orchestration, and cloud-native data engineering on AWS. The Data Engineer will be directly responsible for implementing ingestion, transformation, and integration workflows, ensuring data is high-quality, compliant, and analytics-ready. This role may support other projects or teams within MDH as needed.

The Data Engineer is responsible for designing, building, and maintaining data pipelines and infrastructure to support data-driven decisions and analytics. The individual is responsible for the following tasks:
A. Design, develop, and maintain data pipelines and extract, transform, load (ETL) processes to collect, process, and store structured and unstructured data
B. Build data architecture and storage solutions, including data lakehouses, data lakes, data warehouses, and data marts to support analytics and reporting
C. Develop data reliability, efficiency, and quality checks and processes
D. Prepare data for data modeling
E. Monitor and optimize data architecture and data processing systems
F. Collaborate with multiple teams to understand requirements and objectives
G. Administer testing and troubleshooting related to performance, reliability, and scalability
H. Create and update documentation

Duties / Responsibilities

Hands-On Data Pipeline Development
• Design, code, and deploy ETL/ELT pipelines across the bronze, silver, and gold layers of the Data Lakehouse.
• Build ingestion pipelines for structured (SQL), semi-structured (JSON, XML), and unstructured data in PySpark/Python using AWS Glue or EMR.
• Implement incremental loads, deduplication, error handling, and data validation.
• Actively troubleshoot, debug, and optimize pipelines for scalability and cost efficiency.

EDW & Data Lake Implementation
• Develop dimensional data models (Star Schema, Snowflake Schema) for analytics and reporting.
• Build and maintain tables in Iceberg, Delta Lake, or equivalent open table formats (OTF).
• Optimize partitioning, indexing, and metadata for fast query performance.

Healthcare Data Integration
• Build ingestion and transformation pipelines for EDI X12 transactions (837, 835, 278, etc.).
• Implement mapping and transformation of EDI data to FHIR and HL7 frameworks.
• Work hands-on with AWS HealthLake (or equivalent) to store and query healthcare data.

Data Quality, Security & Compliance
• Develop automated validation scripts to enforce data quality and integrity (a brief illustrative sketch follows this section).
• Implement IAM roles, encryption, and auditing to meet HIPAA and CMS compliance standards.
• Maintain lineage and governance documentation for all pipelines.
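As an illustration of the kind of automated validation described above, a minimal PySpark check might look like the following sketch; the S3 path, column names, and thresholds are hypothetical placeholders, not project specifics.

# Illustrative data-quality check in PySpark (hypothetical dataset and thresholds).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("claims_quality_check").getOrCreate()

# Hypothetical silver-layer claims table produced by an upstream ingestion job.
claims = spark.read.parquet("s3://example-bucket/silver/claims/")

total = claims.count()
null_member_ids = claims.filter(F.col("member_id").isNull()).count()
duplicate_claims = total - claims.dropDuplicates(["claim_id"]).count()

# Fail the job early if basic integrity rules are violated.
if total == 0:
    raise ValueError("No rows found; the upstream extract may have failed.")
if null_member_ids / total > 0.01:
    raise ValueError(f"member_id null rate too high: {null_member_ids}/{total}")
if duplicate_claims > 0:
    raise ValueError(f"Found {duplicate_claims} duplicate claim_id values.")

print(f"Quality check passed: {total} rows, {null_member_ids} null member_id values, no duplicates.")

A check of this kind would typically run as the final step of a Glue or Airflow task so that downstream gold-layer jobs only read validated data.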
Collaboration & Delivery
• Work closely with the Lead Data Engineer, analysts, and data scientists to deliver pipelines that support enterprise-wide analytics.
• Actively contribute to CI/CD pipelines, Infrastructure-as-Code (IaC), and automation.
• Continuously improve pipelines and adopt new technologies where appropriate.

Minimum Qualifications
Specialized experience: The candidate should have experience as a data engineer or in a similar role, with a strong understanding of data architecture and ETL processes. The candidate should be proficient in programming languages for data processing and knowledgeable about distributed computing and parallel processing.
• 3+ years of hands-on experience building, deploying, and maintaining data pipelines on AWS or equivalent cloud platforms.
• Strong coding skills in Python and SQL (Scala or Java a plus).
• Proven experience with Apache Spark (PySpark) for large-scale processing.
• Hands-on experience with AWS Glue, S3, Redshift, Athena, EMR, and Lake Formation.
• Strong debugging and performance optimization skills in distributed systems.
• Hands-on experience with Iceberg, Delta Lake, or other open table formats (an illustrative sketch appears at the end of this description).
• Experience with Airflow or other pipeline orchestration frameworks.
• Practical experience with CI/CD and Infrastructure-as-Code (Terraform, CloudFormation).
• Practical experience with EDI X12, HL7, or FHIR data formats.
• Strong understanding of Medallion Architecture for data lakehouses.
• Hands-on experience building dimensional models and data warehouses.
• Working knowledge of HIPAA and CMS interoperability requirements.

Education: This position requires a bachelor's or master's degree from an accredited college or university with a major in computer science, statistics, mathematics, economics, or a related field. Three (3) years of equivalent experience in a related field may be substituted for the bachelor's degree.
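For a sense of the hands-on pipeline work the qualifications above describe, here is a minimal, hypothetical PySpark sketch of an incremental, deduplicated bronze-to-silver load. It assumes the Delta Lake libraries are available on the Glue or EMR Spark runtime and that the silver table already exists; the paths, column names, and watermark value are placeholders only.

# Illustrative incremental bronze-to-silver load with PySpark and Delta Lake.
# Assumes Delta libraries on the Spark runtime and an existing silver table;
# paths, columns, and the watermark value are hypothetical.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window
from delta.tables import DeltaTable

spark = (
    SparkSession.builder.appName("claims_incremental_load")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

bronze_path = "s3://example-bucket/bronze/claims/"
silver_path = "s3://example-bucket/silver/claims/"

# Incremental read: only rows newer than the last processed watermark.
# In practice the watermark would come from a job bookmark or control table.
last_watermark = "2026-01-01"
latest_per_claim = Window.partitionBy("claim_id").orderBy(F.col("updated_at").desc())

incoming = (
    spark.read.format("delta").load(bronze_path)
    .filter(F.col("updated_at") > F.lit(last_watermark))
    # Deduplicate: keep only the most recent record per claim_id.
    .withColumn("rn", F.row_number().over(latest_per_claim))
    .filter(F.col("rn") == 1)
    .drop("rn")
)

# Upsert into the silver table so reprocessed records update in place.
silver = DeltaTable.forPath(spark, silver_path)
(
    silver.alias("t")
    .merge(incoming.alias("s"), "t.claim_id = s.claim_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)

The merge keeps the load idempotent: re-running the job over the same watermark window updates existing claims in place rather than duplicating them.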