

Data/ETL Developer (Hybrid)
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data/ETL Developer (Hybrid) for up to 2 years in Baltimore or Linthicum, MD, offering a competitive pay rate. Requires 5+ years of ETL coding experience, proficiency in Python and SQL, and expertise in AWS services.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
-
🗓️ - Date discovered
September 24, 2025
🕒 - Project duration
More than 6 months
-
🏝️ - Location type
Hybrid
-
📄 - Contract type
Unknown
-
🔒 - Security clearance
Unknown
-
📍 - Location detailed
Baltimore, MD
-
🧠 - Skills detailed
#Data Mart #Data Lakehouse #Distributed Computing #Programming #"ETL (Extract, Transform, Load)" #Data Ingestion #Cloud #CMS (Centers for Medicare & Medicaid Services) #Amazon Redshift #Data Lake #Data Pipeline #Leadership #Redshift #Python #RDBMS (Relational Database Management System) #Compliance #RDS (Amazon Relational Database Service) #Apache Spark #Computer Science #SQL (Structured Query Language) #Database Performance #Data Engineering #Data Migration #AWS Glue #Business Analysis #Data Architecture #Data Integration #Automation #Agile #S3 (Amazon Simple Storage Service) #OpenSearch #Spark (Apache Spark) #Athena #Data Management #Data Modeling #Mathematics #Security #Data Warehouse #Databases #Scala #NoSQL #Data Processing #GitHub #API (Application Programming Interface) #DynamoDB #AWS (Amazon Web Services) #Migration #Storage #BI (Business Intelligence) #Documentation #Statistics #Data Integrity
Role description
Job Title: Data/ETL Developer (Hybrid with 40% onsite)
Location: Baltimore City, MD or Linthicum, MD
Duration: up to 2 Years
Client - Maryland Department of Health (MDH)
The MDH Office of Enterprise Technology
MD Think Benefits Department - Centers for Medicare & Medicaid Services (CMS), Worker Portal Project
Position Description: Responsible for designing, building, and maintaining data pipelines and infrastructure to support data-driven decisions and analytics.
The individual is responsible for the following tasks:
• Design, develop, and maintain data pipelines and extract, transform, load (ETL) processes to collect, process, and store structured and unstructured data
• Build data architecture and storage solutions, including data lakehouses, data lakes, data warehouses, and data marts to support analytics and reporting
• Develop data reliability, efficiency, and quality checks and processes
• Prepare data for data modeling
• Monitor and optimize data architecture and data processing systems
• Collaborate with multiple teams to understand requirements and objectives
• Perform testing and troubleshooting related to performance, reliability, and scalability
• Create and update documentation
Additional Responsibilities: In addition to the responsibilities listed above, the individual will also be expected to perform the following:
Data Architecture and Modeling:
• Design and implement robust, scalable data models to support the PMM application, analytics, and business intelligence initiatives
• Optimize data warehousing solutions and manage data migrations in the AWS ecosystem, utilizing Amazon Redshift, RDS, and DocumentDB services (a minimal Redshift load sketch follows this section)
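To make the warehousing and migration work above concrete, the sketch below shows one common pattern: submitting a COPY of a staged S3 extract into Amazon Redshift through the boto3 Redshift Data API. The cluster, database, secret ARN, IAM role, table, and bucket names are illustrative placeholders, not details of this project.

```python
# Hypothetical sketch: load a staged S3 extract into Redshift via the Data API.
# All identifiers (cluster, database, secret, bucket, IAM role, table) are placeholders.
import time

import boto3

client = boto3.client("redshift-data", region_name="us-east-1")

COPY_SQL = """
    COPY analytics.worker_portal_events
    FROM 's3://example-staging-bucket/worker-portal/events/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-load-role'
    FORMAT AS PARQUET;
"""


def run_copy() -> None:
    """Submit the COPY statement and poll until Redshift finishes it."""
    resp = client.execute_statement(
        ClusterIdentifier="example-cluster",
        Database="example_db",
        SecretArn="arn:aws:secretsmanager:us-east-1:123456789012:secret:example",
        Sql=COPY_SQL,
    )
    statement_id = resp["Id"]

    # The Data API is asynchronous, so poll describe_statement for a terminal state.
    while True:
        status = client.describe_statement(Id=statement_id)
        if status["Status"] in ("FINISHED", "FAILED", "ABORTED"):
            break
        time.sleep(5)

    if status["Status"] != "FINISHED":
        raise RuntimeError(f"COPY failed: {status.get('Error', 'unknown error')}")


if __name__ == "__main__":
    run_copy()
```

The Data API avoids managing persistent database connections, which suits short-lived load jobs triggered from Glue, Lambda, or an orchestration tool.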
ETL Development:
• Develop and maintain scalable ETL pipelines using AWS Glue and other AWS services to enhance data collection, integration, and aggregation (see the Glue job sketch below)
• Ensure data integrity and timeliness in the data pipeline, troubleshooting any issues that arise during data processing
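As a rough illustration of this kind of Glue pipeline, the sketch below reads a source table from the Glue Data Catalog, applies a column mapping, and writes partitioned Parquet to S3. The catalog database, table, columns, and output path are hypothetical placeholders, not project details.

```python
# Minimal AWS Glue PySpark job sketch. Database, table, columns, and the
# output path are hypothetical placeholders for illustration only.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])

glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Extract: read the source table registered in the Glue Data Catalog.
source = glue_context.create_dynamic_frame.from_catalog(
    database="example_db",
    table_name="example_claims_raw",
)

# Transform: keep and rename a few columns, casting types where needed.
mapped = ApplyMapping.apply(
    frame=source,
    mappings=[
        ("claim_id", "string", "claim_id", "string"),
        ("member_id", "string", "member_id", "string"),
        ("claim_amount", "string", "claim_amount", "double"),
        ("service_date", "string", "service_date", "date"),
    ],
)

# Load: write the cleaned data to S3 as Parquet, partitioned by service date.
glue_context.write_dynamic_frame.from_options(
    frame=mapped,
    connection_type="s3",
    connection_options={
        "path": "s3://example-curated-bucket/claims/",
        "partitionKeys": ["service_date"],
    },
    format="parquet",
)

job.commit()
```

A job like this would typically be parameterized through Glue job arguments and triggered by the orchestration layer mentioned later in the posting.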
Data Integration:
• Integrate data from various sources using AWS technologies, ensuring seamless data flow across systems
• Collaborate with stakeholders to define data ingestion requirements and implement solutions to meet business needs
Performance Optimization:
• Monitor, tune, and manage database performance to ensure efficient data loads and queries
• Implement best practices for data management within AWS to optimize storage and computing costs
Security and Compliance:
• Ensure all data practices comply with regulatory requirements and department policies
• Implement and maintain security measures to protect data within AWS services
Team Collaboration and Leadership:
• Lead and mentor junior data engineers and team members on AWS best practices and technical challenges
• Collaborate with the UI/API team, business analysts, and other stakeholders to support data-driven decision-making
Innovation and Continuous Improvement:
• Explore and adopt new technologies within the AWS cloud to enhance the capabilities of the data platform
• Continuously improve existing systems by analyzing business needs and technology trends
Education:
• This position requires a bachelor’s or master’s degree from an accredited college or university with a major in computer science, statistics, mathematics, economics, or related field.
• Three (3) years of equivalent experience in a related field may be substituted for the Bachelor’s degree.
General Experience:
• The proposed candidate must have a minimum of three (3) years of experience as a data engineer.
Specialized experience:
1. The candidate should have experience as a data engineer or in a similar role, with a strong understanding of data architecture and ETL processes.
2. The candidate should be proficient in programming languages for data processing and knowledgeable about distributed computing and parallel processing.
• Minimum of 5+ years of ETL coding experience
• Proficiency in programming languages such as Python and SQL for data processing and automation
• Experience with distributed computing frameworks like Apache Spark or similar technologies
• Experience with the AWS data environment, primarily Glue, S3, DocumentDB, Redshift, RDS, Athena, etc.
• Experience with data warehouses/RDBMS such as Redshift and NoSQL data stores such as DocumentDB, DynamoDB, OpenSearch, etc.
• Experience in building data lakes using AWS Lake Formation
• Experience with workflow orchestration and scheduling tools such as AWS Step Functions and Amazon MWAA (see the orchestration sketch after this list)
• Strong understanding of relational databases (including tables, views, indexes, table spaces)
• Experience with source control tools such as GitHub and related CI/CD processes
• Ability to analyze a company’s data needs
• Strong problem-solving skills
• Experience with the SDLC and Agile methodologies
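For the orchestration tools referenced in the list above, a minimal Amazon MWAA (Apache Airflow 2.x) DAG is sketched below: it runs a Glue ETL job and then a Glue crawler so downstream Athena queries see the new partitions. The DAG ID, job name, crawler name, and schedule are assumptions made for illustration; AWS Step Functions could express the same flow as a state machine.

```python
# Illustrative MWAA (Apache Airflow 2.x) DAG that runs a Glue ETL job and then
# a Glue crawler. Job and crawler names are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.providers.amazon.aws.operators.glue import GlueJobOperator
from airflow.providers.amazon.aws.operators.glue_crawler import GlueCrawlerOperator

with DAG(
    dag_id="example_daily_etl",
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",  # accepted in Airflow 2.x; newer versions prefer `schedule`
    catchup=False,
    tags=["example", "etl"],
) as dag:

    # Run the (assumed) Glue ETL job and block until it completes.
    run_glue_job = GlueJobOperator(
        task_id="run_claims_etl",
        job_name="example-claims-etl",  # assumed Glue job name
        wait_for_completion=True,
    )

    # Re-crawl the curated zone so the Data Catalog picks up new partitions.
    refresh_catalog = GlueCrawlerOperator(
        task_id="refresh_curated_catalog",
        config={"Name": "example-curated-crawler"},  # assumed crawler name
        wait_for_completion=True,
    )

    run_glue_job >> refresh_catalog
```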