

Mindlance
Data Engineer (HPC + Azure)
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Mid-Level Data Engineer (HPC + Azure) with an unknown contract length and a day rate of $592. It requires 3–5 years of experience, strong UNIX/Linux skills, Azure data stack proficiency, and a Bachelor's degree in a technical field.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
592
-
🗓️ - Date
February 21, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Villa Park, IL
-
🧠 - Skills detailed
#Databases #Containers #Azure SQL #Kubernetes #Scala #Cloud #Docker #Spark (Apache Spark) #Batch #Terraform #Databricks #Scripting #Python #HDFS (Hadoop Distributed File System) #Data Science #BI (Business Intelligence) #Data Architecture #Kafka (Apache Kafka) #Linux #SQL (Structured Query Language) #ADF (Azure Data Factory) #Data Pipeline #Azure #ML (Machine Learning) #Unix #ETL (Extract, Transform, Load) #Synapse #Automation #Data Processing #Data Engineering
Role description
🚀 Hiring: Mid-Level Data Engineer (Hybrid – HPC + Azure)
We’re looking for a talented Data Engineer who thrives at the intersection of high-performance computing and cloud data platforms. If you enjoy building scalable data pipelines and working in hybrid environments, this role is for you!
🔹 What You’ll Do:
• Design and build scalable ETL/ELT pipelines across on-prem HPC and Azure
• Develop hybrid data architectures supporting analytics, ML, and BI
• Optimize databases and data stores in Linux/UNIX + cloud environments
• Implement MLOps practices and modern data flow architectures (batch + real-time)
• Collaborate with data scientists and ML engineers on end-to-end data solutions
🔹 Must-Have Skills:
✔ 3–5 years of data engineering experience
✔ Strong UNIX/Linux command-line expertise
✔ Azure data stack (ADF, Synapse, Azure SQL)
✔ Python, SQL, and scripting for automation
✔ Experience with Spark/Databricks, containers (Docker/Kubernetes)
🔹 Nice to Have:
✨ Kafka/Event Hubs, Terraform, distributed file systems (HDFS/Lustre)
✨ Real-time data processing, CI/CD for data, hybrid cloud networking
🎓 Bachelor’s degree in a technical field required (Azure or data certs a plus)
📩 Interested or know someone who fits?
Drop a comment or DM me!
#Hiring #DataEngineer #Azure #DataEngineering #HybridCloud #HPC #TechJobs