The Planet Group

Azure - Sr. Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for an "Azure - Sr. Data Engineer" in downtown Chicago for 6+ months at $65-73/hour W2. Requires 7+ years of data engineering experience, including 3+ years each in ETL/ELT and SQL, 2+ years with Azure Data Factory and Databricks, plus Python and PySpark expertise.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
May 16, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
On-site
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Chicago, IL
-
🧠 - Skills detailed
#Datasets #Spark (Apache Spark) #Infrastructure as Code (IaC) #SQL (Structured Query Language) #ADLS (Azure Data Lake Storage) #Data Pipeline #Code Reviews #Documentation #Databricks #Data Engineering #Azure Data Factory #Logging #PySpark #ADF (Azure Data Factory) #Azure #Azure SQL #Automation #Monitoring #Triggers #Python #ETL (Extract, Transform, Load)
Role description
Local candidates only - in-person interview is a MUST
Duration - 6+ months
Location - Chicago Downtown - onsite Tue, Wed, and Thu
Rate - $65-73/hour W2
Key Responsibilities
• Design, build, and support end-to-end data pipelines (ingestion, transformation, validation, publishing).
• Develop and optimize SQL and PySpark/Databricks transformations for large datasets.
• Build production-grade Python components (reusable modules, logging, error handling, testing).
• Create and maintain Azure Data Factory (ADF) pipelines (triggers, parameterization, monitoring, failure handling).
• Work within Azure environments (ADLS Gen2, Azure SQL, resource groups, portal operations).
• Provision and maintain Azure components using Pulumi (Infrastructure as Code).
• Participate in code reviews, documentation, and operational support.
Must-have Skills (Required)
• 7+ years of experience as a data engineer
• 3+ years with ETL/ELT concepts: strong understanding of pipeline patterns, incremental loads, data validation, and troubleshooting.
• 3+ years in SQL: advanced querying (CTEs, views, joins, complex query logic) and performance tuning for transformations and validation.
• 2+ years using Python: production-quality development (modular code, testing, logging, integration with APIs/files, CI/CD, unit/integration test automation, code coverage).
• PySpark: distributed transformations and performance optimization, CI/CD, unit/integration test automation, code coverage.
• 2+ years using Azure Data Factory (ADF)
• 2+ years using Databricks
#TECH #Chicagohybrid