

nTech Workforce
Data Engineer -- W2 ONLY
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is a Data Engineer position for a 12-month W2 contract, onsite in Oakbrook Terrace, IL. Requires 3-5 years of data engineering experience, proficiency in SQL, Azure services, and UNIX/Linux, along with relevant certifications.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
February 21, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
On-site
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Oakbrook Terrace, IL
-
🧠 - Skills detailed
#Azure Data Factory #Databases #Programming #Azure SQL #Kubernetes #Scala #Cloud #PostgreSQL #Compliance #Docker #Monitoring #Spark (Apache Spark) #Batch #Data Integration #Lambda (AWS Lambda) #Security #Data Quality #Terraform #Databricks #Scripting #Documentation #HDFS (Hadoop Distributed File System) #Data Science #BI (Business Intelligence) #Data Architecture #Kafka (Apache Kafka) #Linux #Python #SQL (Structured Query Language) #Oracle #Azure cloud #Database Systems #Storage #Data Governance #SQL Server #ADF (Azure Data Factory) #Data Pipeline #Azure #Shell Scripting #ML (Machine Learning) #Graph Databases #Unix #ETL (Extract, Transform, Load) #Synapse #MySQL #Automation #Data Processing #Data Engineering #Computer Science
Role description
Title: Data Engineer
Location: Onsite in Oakbrook Terrace, IL
Terms of Employment
• W2 Contract, 12 months
• The office is located in Oakbrook Terrace, IL.
Overview
Our Client is seeking a mid-level Data Engineer who will be responsible for designing, building, and maintaining the data infrastructure and pipelines that enable efficient data processing, storage, and analysis within a hybrid computing environment. This role bridges the gap between data collection and data consumption across both high-performance computing (HPC) servers and Azure cloud platforms, ensuring that data is accessible, reliable, and optimized for analytics, machine learning, and business intelligence purposes.
Responsibilities
• Design, develop, and implement scalable data pipelines using modern ETL/ELT frameworks and tools in hybrid environments (a minimal sketch follows this list)
• Create and maintain data architectures that span high-performance computing servers and Azure cloud services
• Build robust data integration solutions to connect various data sources across on-premises and cloud systems
• Configure and optimize databases and data stores in both UNIX/Linux environments and cloud platforms
• Implement MLOps practices to support the machine learning lifecycle from development to production
• Design and implement modern data flow architectures that support both batch and real-time processing needs
• Apply performance tuning techniques for high-volume data processing in HPC environments
• Collaborate with data scientists, ML engineers, and business stakeholders to understand data requirements
• Develop and maintain documentation for data infrastructure, pipelines, and workflows
• Implement data quality monitoring and validation across hybrid environments
• Mentor junior data engineers on best practices for hybrid cloud/on-premises architectures
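To make the pipeline and data-quality expectations above concrete, here is a minimal, self-contained Python ETL sketch. The file name, table name, and validation rules are hypothetical, and sqlite3 stands in for the PostgreSQL/Azure SQL targets named in this posting so the example runs anywhere:

```python
"""Minimal ETL sketch: extract CSV -> validate -> load into a SQL store.

Illustrative only. The source file, table, and rules are hypothetical;
sqlite3 is used as a stand-in sink so the sketch is runnable as-is.
"""
import csv
import sqlite3


def extract(path):
    # Read raw rows from a delimited source file.
    with open(path, newline="") as f:
        yield from csv.DictReader(f)


def validate(rows):
    # Basic data-quality gate: drop rows with missing keys or bad amounts.
    for row in rows:
        if not row.get("id"):
            continue  # reject records without a primary key
        try:
            row["amount"] = float(row["amount"])
        except (KeyError, ValueError):
            continue  # reject records with a missing or non-numeric amount
        yield row


def load(rows, conn):
    # Idempotent upsert into the target table.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS fact_sales (id TEXT PRIMARY KEY, amount REAL)"
    )
    conn.executemany(
        "INSERT OR REPLACE INTO fact_sales (id, amount) VALUES (?, ?)",
        ((r["id"], r["amount"]) for r in rows),
    )
    conn.commit()


if __name__ == "__main__":
    with sqlite3.connect("warehouse.db") as conn:
        load(validate(extract("sales.csv")), conn)
```

The same extract → validate → load shape carries over when the validation step is replaced by a fuller data-quality framework and the load step targets a production database.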
Required Skills & Experience
• Bachelor's degree in Computer Science, Information Systems, or related technical field
• Relevant certifications in Azure, data technologies, or HPC administration
• 3-5 years of experience in data engineering or related field
• Strong UNIX/Linux administration skills and command-line proficiency
• Experience with high-performance computing environments and workload management
• Proficiency in SQL and experience with major database systems (PostgreSQL, SQL Server, Oracle, MySQL)
• Hands-on experience with Azure cloud data services (Azure Data Factory, Synapse Analytics, Azure SQL)
• Knowledge of modern data flow architectures (Lambda, Kappa, Delta) and implementation patterns
• Experience with ML workflow frameworks and ML lifecycle management tools
• Programming skills in Python and shell scripting for automation and data processing
• Familiarity with containerization (Docker) and orchestration (Kubernetes)
• Experience with data processing frameworks (Spark, Databricks, Azure Batch); see the PySpark sketch after this list
• Understanding of networking concepts for data movement between on-premises and cloud
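As a rough illustration of the Spark requirement, the sketch below shows a minimal PySpark batch job (read CSV, aggregate, write Parquet). The paths and column names are hypothetical, and it assumes a local `pip install pyspark`; in the Azure half of a hybrid setup the output path would typically be an ADLS Gen2 URI rather than a local directory:

```python
# Minimal PySpark batch sketch: read CSV, apply a transform, write Parquet.
# Paths and column names are hypothetical; assumes `pip install pyspark`.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("hybrid-batch-sketch").getOrCreate()

# Extract: header-aware CSV read; schema inference kept simple for the sketch.
events = (
    spark.read.option("header", True)
    .option("inferSchema", True)
    .csv("/data/landing/events.csv")  # on-prem landing zone (hypothetical path)
)

# Transform: filter bad records and aggregate per day, a typical ELT step.
daily = (
    events.where(F.col("event_ts").isNotNull())
    .groupBy(F.to_date("event_ts").alias("event_date"))
    .agg(F.count("*").alias("event_count"))
)

# Load: columnar output; on Azure this path would be an abfss:// URI
# pointing at ADLS Gen2 instead of a local directory.
daily.write.mode("overwrite").parquet("/data/curated/daily_events")

spark.stop()
```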
Preferred Skills & Experience
• Experience with real-time data processing and streaming technologies (Kafka, Event Hubs); a consumer sketch follows this list
• Knowledge of infrastructure-as-code tools (Terraform, ARM templates)
• Experience with distributed file systems (HDFS, Lustre) and object storage
• Familiarity with graph databases and specialized analytics datastores
• Understanding of data governance, security, and compliance requirements across hybrid environments
• Experience with optimization techniques for large-scale data transfers between systems
• Knowledge of GPU acceleration for data processing pipelines
• Background in implementing CI/CD for data solutions
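For the streaming item above, here is a minimal Python consumer sketch using the confluent-kafka client. The broker address, topic, and group id are placeholders; Azure Event Hubs also exposes a Kafka-compatible endpoint, so the same client code is one common bridge between the Kafka and Event Hubs requirements:

```python
# Minimal streaming-consumption sketch (pip install confluent-kafka).
# Broker, topic, and group id are placeholders, not values from this posting.
import json

from confluent_kafka import Consumer, KafkaError

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",  # placeholder broker
    "group.id": "data-eng-sketch",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["events"])  # placeholder topic

try:
    while True:
        msg = consumer.poll(timeout=1.0)  # block up to 1s for the next record
        if msg is None:
            continue
        if msg.error():
            if msg.error().code() == KafkaError._PARTITION_EOF:
                continue  # end of partition is informational, not fatal
            raise RuntimeError(msg.error())
        record = json.loads(msg.value())
        # Downstream validation / enrichment would happen here.
        print(record)
finally:
    consumer.close()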