

STAFFXPERT LLC
Data Engineer (Azure + Databricks)
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer (Azure + Databricks) in Chicago, IL (Hybrid – 3 days onsite) with a contract length of "unknown" and a pay rate of "unknown". Requires 5+ years in Data Engineering, 3+ years with ETL/ELT, and strong SQL, PySpark, and Python skills.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date
March 26, 2026
🕒 - Duration
Unknown
🏝️ - Location
Hybrid
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
Chicago, IL
🧠 - Skills detailed
#.Net #Documentation #SQL (Structured Query Language) #ADLS (Azure Data Lake Storage) #Complex Queries #SQL Queries #Azure #Azure SQL #Data Engineering #Scala #Data Pipeline #Java #Computer Science #Cloud #Monitoring #Triggers #Datasets #TypeScript #Logging #ADF (Azure Data Factory) #PySpark #ETL (Extract, Transform, Load) #Azure Data Factory #Code Reviews #Delta Lake #Databricks #Infrastructure as Code (IaC) #Python #R #Spark (Apache Spark)
Role description
Job Title: Data Engineer (Azure + Databricks)
Location: Chicago, IL (Hybrid – 3 days onsite)
Local candidates only
Job Summary
STAFFXPERT LLC is seeking a Data Engineer (Azure + Databricks) on behalf of our client in Chicago, IL. This role is ideal for a hands-on data professional with strong experience building scalable data pipelines within an Azure ecosystem. You will be responsible for designing, developing, and maintaining robust data solutions that support critical business operations and analytics.
Key Responsibilities
• Design, build, and maintain end-to-end data pipelines, including ingestion, transformation, validation, and publishing
• Develop and optimize complex SQL queries and PySpark transformations for large-scale datasets
• Build production-grade Python modules with robust logging, error handling, and testing practices
• Create and manage Azure Data Factory (ADF) pipelines, including triggers, parameterization, and monitoring
• Work with Azure services such as ADLS Gen2, Azure SQL, and related components
• Provision and manage cloud infrastructure using Infrastructure as Code (IaC) tools
• Optimize Databricks clusters, jobs, and workflows for performance and reliability
• Participate in code reviews, documentation, and production support activities
• Collaborate with cross-functional teams to troubleshoot and resolve data-related issues
Required Qualifications
• Bachelor’s degree in Computer Science, Engineering, or a related field (or equivalent experience)
• 5+ years of experience in Data Engineering
• 3+ years of experience with ETL/ELT processes, PySpark, and advanced SQL
• Strong expertise in SQL, including complex queries, joins, CTEs, and performance tuning
• 2+ years of hands-on Python development in production environments
• Experience with Databricks (notebooks, workflows, jobs, Delta Lake)
• Experience with Azure Data Factory (pipeline development and monitoring)
• Hands-on experience with Azure services such as ADLS Gen2 and Azure SQL
• Solid understanding of data pipeline architecture, incremental loads, and data validation
Preferred Qualifications
• Experience with Infrastructure as Code tools (e.g., Pulumi)
• Knowledge of Delta Lake architecture
• Exposure to R for data validation
• Familiarity with TypeScript, Java, or .NET
• Basic troubleshooting experience with modern web or backend applications
Work Environment
• Hybrid work model with onsite collaboration required multiple days per week
• Opportunity to work on scalable, cloud-based data platforms with modern technologies
