

Motion Recruitment
Senior Azure Data Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Azure Data Engineer, 6-month contract, paying "$X/hour". Located in Downtown Chicago (3 days onsite). Requires 5+ years in data engineering, 3+ years in SQL and ETL/ELT, and proficiency in PySpark and Azure Data Factory.
Country
United States
Currency
$ USD
-
Day rate
680
-
Date
March 25, 2026
Duration
Unknown
-
Location
On-site
-
Contract
Unknown
-
Security
Unknown
-
Location detailed
Chicago, IL
-
Skills detailed
#ADLS (Azure Data Lake Storage) #Java #Databricks #PySpark #.Net #Logging #ML (Machine Learning) #Debugging #TypeScript #Data Quality #AI (Artificial Intelligence) #Azure SQL #Triggers #Monitoring #Code Reviews #Python #ADF (Azure Data Factory) #Spring Boot #Angular #Infrastructure as Code (IaC) #Data Pipeline #Documentation #R #ETL (Extract, Transform, Load) #Data Engineering #Datasets #Delta Lake #Spark (Apache Spark) #SQL (Structured Query Language) #Azure Data Factory #Azure #Azure Portal #Automation #Storage
Role description
About the job:
Our client, the nation's largest provider-driven healthcare performance improvement company, is looking for a Sr. Azure Data Engineer with solid, recent hands-on experience in:
• PySpark
• Databricks (notebooks, jobs, workflows, External/Managed tables, SQL Warehouse, AI/ML)
• Databricks cluster optimization
• ADLS (Blob Storage)
• Azure Data Factory
• Azure SQL
• Azure App Services
Title: Senior Azure Data Engineer
Location: Downtown Chicago/60606 (3 days onsite Tues, Wed, Thurs)
What you will be doing:
Key Responsibilities
• Design, build, and support end-to-end data pipelines (ingestion, transformation, validation, publishing).
• Develop and optimize SQL and PySpark/Databricks transformations for large datasets.
• Build production-grade Python components (reusable modules, logging, error handling, testing).
• Create and maintain Azure Data Factory (ADF) pipelines (triggers, parameterization, monitoring, failure handling).
• Work within Azure environments (ADLS Gen2, Azure SQL, resource groups, portal operations).
• Provision and maintain Azure components using Pulumi (Infrastructure as Code).
• Participate in code reviews, documentation, and operational support (triage + root cause analysis).
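The pipeline stages listed above (ingest, transform, validate, publish) can be sketched as a minimal, self-contained Python example. The function names, sample rows, and column names are hypothetical illustrations, not details from this role; real ingestion and publishing would target ADLS/Azure SQL rather than in-memory lists:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline")

def ingest():
    # Stand-in for reading raw records from ADLS or Azure SQL.
    return [{"id": 1, "amount": "10.5"}, {"id": 2, "amount": "bad"}]

def transform(rows):
    # Cast amount to float; split off rows that fail validation
    # instead of letting one bad record kill the run.
    good, bad = [], []
    for row in rows:
        try:
            good.append({**row, "amount": float(row["amount"])})
        except ValueError:
            log.warning("rejecting row %s", row["id"])
            bad.append(row)
    return good, bad

def publish(rows):
    # Stand-in for writing validated rows to a Delta table or SQL warehouse.
    log.info("publishing %d rows", len(rows))
    return len(rows)

rows = ingest()
good, rejected = transform(rows)
published = publish(good)
```

The reject-and-continue pattern in `transform` reflects the posting's emphasis on data validation and failure handling: invalid rows are quarantined and logged rather than silently dropped.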
Must-have Skills (Required)
• 5+ years' experience as a data engineer
• 3+ years with ETL/ELT concepts: Strong understanding of pipeline patterns, incremental loads, data validation, and troubleshooting.
• 3+ years in SQL: Advanced querying (CTEs, views, joins, complex query logic) and performance tuning for transformations and validation.
• 2+ years using Python: Production-quality development (modular code, testing, logging, integration with APIs/files, CI/CD, unit/integration test automation, code coverage).
• PySpark: Distributed transformations and performance optimization (joins, partitions, debugging), CI/CD, unit/integration test automation, code coverage.
• 1+ year using Azure Data Factory (ADF): Build/operate ADF pipelines, parameterization, triggers, monitoring, retry/error handling; integrate with Databricks/ADLS.
• 1+ year using Databricks: Develop and operationalize notebooks/jobs/workflows; Delta Lake patterns; basic cluster/job configuration.
• Azure fundamentals + Pulumi: Hands-on with ADLS Gen2, Azure Portal, Storage Explorer, resource groups, Azure SQL, and familiarity integrating with Azure OpenAI. Able to use/maintain Pulumi scripts for provisioning and managing Azure resources across environments.
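The incremental-load pattern named in the ETL/ELT requirement can be illustrated with a short Python sketch. This is a generic watermark-based approach, not a description of the client's actual pipelines; the row shape and field names are assumptions:

```python
from datetime import datetime

def incremental_load(source_rows, watermark):
    """Pick up only rows modified since the last run.

    A common incremental-load pattern: each run filters the source on a
    stored high-water mark (here a modified timestamp) and advances the
    mark to the newest row seen, so unchanged rows are never reprocessed.
    """
    new_rows = [r for r in source_rows if r["modified"] > watermark]
    new_watermark = max((r["modified"] for r in new_rows), default=watermark)
    return new_rows, new_watermark

rows = [
    {"id": 1, "modified": datetime(2026, 3, 1)},
    {"id": 2, "modified": datetime(2026, 3, 20)},
]
# Only row 2 is newer than the March 10 watermark.
new_rows, wm = incremental_load(rows, datetime(2026, 3, 10))
```

In ADF or Databricks the watermark would typically be persisted (e.g. in a control table) between pipeline runs rather than passed in memory.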
Nice-to-have Skills
• R: Ability to support/translate validation rules with SQL scripts and create data quality reports.
• TypeScript: Useful for Pulumi pipelines that create Azure components.
• Java: Useful for integration with existing services/components.
• .NET: Useful for integration with existing services/components.
• Angular / Spring Boot: Minor troubleshooting or coordination with app teams.
