

nTech Workforce
Data Engineer -- W2 ONLY
Featured Role | Apply direct with Data Freelance Hub
This role is a 12-month W2 contract for a Data Engineer, hybrid in Oak Brook, IL. It requires 3-5+ years of experience, proficiency in SQL and Azure services, strong UNIX/Linux skills, and Python. Experience with data processing frameworks such as Spark or Databricks is essential.
Country
United States
-
Currency
$ USD
-
Day rate
Unknown
-
Date
March 21, 2026
-
Duration
More than 6 months
-
Location
Hybrid
-
Contract
W2 Contractor
-
Security
Unknown
-
Location detailed
Oak Brook, IL
-
Skills detailed
#SQL (Structured Query Language) #Kubernetes #ADF (Azure Data Factory) #Computer Science #Data Processing #ETL (Extract, Transform, Load) #Data Science #Scala #Azure cloud #Terraform #Azure #Shell Scripting #Spark (Apache Spark) #Data Pipeline #Data Engineering #Programming #Python #Documentation #Azure SQL #Oracle #ML (Machine Learning) #SQL Server #PostgreSQL #Linux #BI (Business Intelligence) #Scripting #MySQL #Databricks #Synapse #Docker #Azure Data Factory #Cloud #Databases #Unix #Kafka (Apache Kafka)
Role description
Title: Data Engineer
Location: Hybrid in Oak Brook, IL
Terms of Employment:
• W2 Contract, 12 Months (Likely Extension)
• This is a hybrid position. The selected candidate must be comfortable working onsite on Tuesdays, Wednesdays, and Thursdays in Oak Brook, IL.
Overview & Responsibilities:
Work with a leading professional services firm within their Grid Analytics team. As a Mid-Level Data Engineer, you will be the backbone of data operations, designing and maintaining the pipelines that empower data scientists to drive business intelligence. You will play a critical role in bridging data across high-performance computing (HPC) servers and the Azure cloud, ensuring data is reliable, optimized, and accessible for complex machine learning lifecycles. You will…
• Design and implement scalable ETL/ELT data pipelines in hybrid (on-prem and cloud) environments.
• Build robust integration solutions to connect various data sources across Azure and on-premises systems.
• Configure and optimize databases on UNIX/Linux and cloud platforms.
• Implement MLOps practices to support the machine learning lifecycle.
• Collaborate with data scientists and stakeholders to understand and meet data requirements.
• Maintain documentation for infrastructure, pipelines, and workflows.
Required Qualifications:
• 3 to 5+ years of experience in Data Engineering.
• Bachelor's degree in Computer Science, Information Systems, or a related technical field.
• Proficiency in SQL and experience with major databases (PostgreSQL, SQL Server, Oracle, or MySQL).
• Hands-on experience with Azure cloud data services (Azure Data Factory, Synapse Analytics, Azure SQL).
• Strong UNIX/Linux administration skills and Python programming/shell scripting proficiency.
• Experience with data processing frameworks such as Spark or Databricks.
Preferred Qualifications:
• Experience with real-time data processing (Kafka, Event Hubs).
• Knowledge of Infrastructure-as-Code (Terraform, ARM templates).
• Familiarity with containerization (Docker, Kubernetes).
• Background in implementing CI/CD for data solutions.






