

Data Engineer II / III
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer II/III with a contract length of over 6 months, offering $55/hr in Cincinnati, OH. Requires 5+ years in data engineering, 2+ years with Azure services, and proficiency in Python, SQL, and ETL design.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
$440
🗓️ - Date discovered
May 4, 2025
🕒 - Project duration
More than 6 months
🏝️ - Location type
Hybrid
📄 - Contract type
W2 Contractor
🔒 - Security clearance
Unknown
📍 - Location detailed
Cincinnati, OH
🧠 - Skills detailed
#Cloud #Synapse #Azure #DataArchitecture #Databases #DataLake #Spark (Apache Spark) #ADF (Azure Data Factory) #Python #DataModeling #Security #ML (Machine Learning) #AzureCloud #DataEngineering #Programming #SQL (Structured Query Language) #Scala #AI (Artificial Intelligence) #AzureDataFactory #DataScience #ETL (Extract, Transform, Load) #MicrosoftAzure #Storage #AzureADLS (Azure Data Lake Storage) #SQLQueries #Databricks #DataQuality #PySpark #Git #ADLS (Azure Data Lake Storage) #AzureSynapseAnalytics #DataPipeline #DevOps
Role description
Position Title: Data Engineer II / III
Work Setting: Onsite or Hybrid depending on client requirements
Job Type: Full-Time / Contract-to-Hire
Compensation: $55/hr W2
Position Summary:
A dynamic and experienced Azure Data Engineer is needed to join a forward-thinking data team. This role involves creating robust, scalable data solutions using the Microsoft Azure ecosystem. The ideal candidate will play a crucial role in shaping data architecture, designing ETL workflows, and enabling data-driven decision-making by supporting analytics and machine learning efforts.
Qualifications:
A minimum of 5 years of experience in data engineering or a closely related role
At least 2 years of hands-on experience with Microsoft Azure cloud services
Proficient in Azure Data Factory, Databricks, Azure Synapse Analytics, and Azure Data Lake Storage
Strong programming skills in Python and PySpark
Advanced SQL capabilities and a solid grasp of data modeling concepts
Familiarity with cloud infrastructure, distributed systems, and data warehousing
Exposure to CI/CD pipelines, DevOps practices, and tools like Git is a plus
Excellent communication and team collaboration skills
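For context on the "advanced SQL" and "data modeling" expectations above, a minimal sketch of a star-schema slice with a reporting aggregate is shown below, using Python's stdlib sqlite3 in place of Azure Synapse; all table names and data are hypothetical, not from the posting.

```python
# Hypothetical star-schema slice: one dimension table, one fact table,
# and an aggregating join of the kind reporting workloads rely on.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE fact_sales (
        sale_id    INTEGER PRIMARY KEY,
        product_id INTEGER REFERENCES dim_product(product_id),
        amount     REAL
    );
    INSERT INTO dim_product VALUES (1, 'widget'), (2, 'gadget');
    INSERT INTO fact_sales  VALUES (1, 1, 10.0), (2, 1, 5.0), (3, 2, 7.5);
""")

# Total sales per product, highest first -- a typical reporting view body.
rows = conn.execute("""
    SELECT p.name, SUM(f.amount) AS total
    FROM fact_sales f
    JOIN dim_product p USING (product_id)
    GROUP BY p.name
    ORDER BY total DESC
""").fetchall()
print(rows)  # [('widget', 15.0), ('gadget', 7.5)]
```

In a real Azure Synapse environment the same query shape would typically live in a view or stored procedure rather than inline application code.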
Key Responsibilities:
Design and implement scalable, efficient ETL pipelines using Azure-native tools
Build cloud-based data platforms and infrastructure with Azure Synapse, Data Lake, and SQL databases
Develop optimized SQL queries, views, and stored procedures for analytics and reporting
Clean, transform, and manage both structured and unstructured data for downstream use
Monitor, debug, and enhance data pipeline performance and reliability
Enforce best practices around data quality, governance, and security
Collaborate with analysts and data scientists to support AI/ML initiatives
Document solutions and contribute to internal knowledge sharing and technical standards
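To make the "clean, transform, and manage structured data" responsibility concrete, here is a minimal Python sketch of one ETL transform step with a basic data-quality rule; the field names and rules are hypothetical, not taken from the posting.

```python
# Minimal illustration of a clean/transform step in an ETL pipeline.
# Field names and validation rules here are hypothetical examples.
from datetime import datetime

def transform(records):
    """Normalize raw records: trim and title-case names, parse dates,
    and drop any row missing the required 'id' key."""
    cleaned = []
    for row in records:
        if not row.get("id"):
            continue  # enforce a basic data-quality rule: id is required
        cleaned.append({
            "id": int(row["id"]),
            "name": row.get("name", "").strip().title(),
            "event_date": datetime.strptime(row["event_date"], "%Y-%m-%d").date(),
        })
    return cleaned

raw = [
    {"id": "1", "name": "  alice  ", "event_date": "2025-05-04"},
    {"id": "",  "name": "bob",      "event_date": "2025-05-05"},  # dropped: no id
]
print(transform(raw))
```

In an Azure pipeline the same logic would usually run as a PySpark transformation in Databricks or an ADF data flow rather than plain Python, but the clean-validate-reshape pattern is the same.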
Job Types: Full-time, Contract
Pay: From $55.00 per hour
Ability to Commute:
Cincinnati, OH (Required)
Ability to Relocate:
Cincinnati, OH: Relocate before starting work (Required)
Work Location: Hybrid (Cincinnati, OH)