

Myticas Consulting
Sr. Data Engineer - Python Developer (35012)
Featured Role | Apply directly with Data Freelance Hub
This role is for a Sr. Data Engineer - Python Developer, offering $53-62/hr for a 12+ month remote contract with occasional travel to Springfield, IL. Requires 5+ years in ETL, Python, and healthcare datasets; Azure certifications preferred.
Country
United States
Currency
$ USD
-
Day rate
496
-
Date
May 1, 2026
Duration
More than 6 months
-
Location
Remote
-
Contract
Unknown
-
Security
Unknown
-
Location detailed
Illinois, United States
-
Skills detailed
#Databricks #ADF (Azure Data Factory) #Security #Data Ingestion #Data Pipeline #BI (Business Intelligence) #Teradata #"ETL (Extract, Transform, Load)" #Data Processing #Scripting #Informatica PowerCenter #EDW (Enterprise Data Warehouse) #Data Quality #Bash #Azure Data Factory #Data Warehouse #Spark (Apache Spark) #Azure DevOps #SQL Server #Migration #Azure #Databases #Code Reviews #Consulting #Scala #Data Engineering #Oracle #Automation #GitHub #Informatica #Snowflake #SQL (Structured Query Language) #Cloud #Data Modeling #Computer Science #DevOps #REST (Representational State Transfer) #Python #Datasets
Role description
Clover Consulting has a direct client in Springfield, IL in need of an ETL/Python Developer for a long-term contract role. This is a remote position but may require occasional travel to Springfield.
TITLE: ETL/Python Developer
RATE: $53-62/hr
TERM: 12+ months, with potential to extend
LOCATION: Remote, with potential occasional travel to Springfield, IL
Position Summary
Seeking a hands-on Senior Data Engineer (ETL / Python Developer) to support the Enterprise Data Warehouse (EDW) and Analytics Program. This role plays a critical part in designing, developing, and maintaining scalable data ingestion and transformation pipelines that support Medicaid analytics, federal reporting, and enterprise decision support.
The ideal candidate brings strong ETL and Python engineering expertise, experience working with large healthcare datasets, and the ability to operate effectively in a regulated, audit-sensitive environment. This is a delivery-focused role requiring close collaboration with architects, analysts, QA, PMs, SMEs, developers, and reporting BI teams across both legacy and cloud-based platforms.
Primary Responsibilities
• Design, develop, and maintain enterprise ETL pipelines using Azure Data Factory (ADF), Informatica PowerCenter, and Python-based frameworks
• Build and optimize scalable data processing solutions using Python, Spark, and Databricks
• Support Medicaid analytics and federal reporting initiatives (e.g., T-MSIS, PERM, MARS, Quality of Care)
• Develop robust data validation, reconciliation, and audit-traceable data pipelines
• Write and optimize SQL and stored procedures across relational platforms such as Snowflake, Oracle, and SQL Server
• Participate in cloud migration and modernization initiatives within Azure-based architectures
• Collaborate with analysts, QA, and reporting teams to ensure data quality, accuracy, and timeliness
• Follow data engineering best practices for performance, reliability, reusability, and security
• Support production operations, incident resolution, and root-cause analysis
• Participate in code reviews, source control, and CI/CD processes using Azure DevOps and GitHub
Required Qualifications
• 5+ years of data engineering experience with a focus on enterprise data warehousing
• 5+ years of hands-on ETL development using Informatica PowerCenter, Azure Data Factory, or similar tools
• 5+ years of Python development for data engineering and automation
• 3+ years of experience with Spark-based processing frameworks (Databricks or equivalent)
• Strong SQL expertise and experience with relational databases (such as Teradata, Snowflake, Oracle, SQL Server)
• Experience with source control and DevOps practices (Azure DevOps, GitHub, CI/CD)
• Bachelor's degree or higher in Computer Science, Engineering, Analytics, or a related field
• Strong analytical, problem-solving, and troubleshooting skills
Preferred Qualifications
• Experience supporting State Medicaid EDW or MMIS analytics environments
• Healthcare or public-sector analytics experience (Medicaid / Medicare preferred)
• Data modeling experience in enterprise data warehouse environments
• Scripting experience (PowerShell, Bash) for automation and orchestration
• Experience designing or consuming APIs (REST) within data platforms
• Familiarity with data quality frameworks, reconciliation, and audit support
• Azure certifications related to data engineering or analytics





