

CTC
Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer in Cincinnati, OH, lasting 12+ months; the pay rate is undisclosed. Key skills include Azure, Databricks, Spark, Python, SQL, and data governance. Experience with enterprise data platforms and specific business domains is required.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date
December 20, 2025
🕒 - Duration
More than 6 months
🏝️ - Location
On-site
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
Cincinnati, OH
🧠 - Skills detailed
#Data Modeling #Ansible #Spark (Apache Spark) #NoSQL #Palantir Foundry #Data Engineering #Delta Lake #Data Pipeline #Scala #Automation #Azure #Databricks #SQL (Structured Query Language) #Jenkins #Consulting #GitHub #Azure cloud #Cloud #Distributed Computing #PySpark #Python #Terraform #Puppet #Data Governance
Role description
Job title: Data Engineer
Location: Cincinnati, OH
Duration: 12+ Months
Schedule: Monday - Friday, 8am - 5pm EST
Job Summary
Our Data Engineers design, build, and optimize scalable data solutions that support enterprise analytics, operations, and strategic initiatives. These roles focus on modern data engineering practices, including Azure cloud services, Databricks, Spark, Python, SQL, automation, CI/CD, and data governance. They also collaborate across technical and business teams to deliver reliable, secure, and innovative data products. Some assignments may support specialized platforms such as Palantir Foundry or specific business domains (e.g., sourcing, transportation, or supplier data).
Required Skills
• 2–7+ years of experience in data engineering, analytics, platform integration, or related fields
• Strong hands-on experience with:
   • Azure (data services, Functions, cloud integration)
   • Databricks, Spark/PySpark, Delta Lake, SQL
   • Python and distributed computing concepts
• Experience designing and supporting modern data pipelines and architectures
• Strong SQL and database background (SQL/NoSQL)
• Experience with orchestration or message-based systems
• Experience with Terraform, GitHub, GitHub Actions, and CI/CD workflows
• Experience with automation tools or frameworks (e.g., Ansible, Jenkins, Puppet)
• Ability to monitor and optimize Databricks clusters and workflows
• Familiarity with data cataloging and lineage tools (e.g., Microsoft Purview, Unity Catalog)
• Understanding of data modeling, ingestion, integration patterns, and SDLC
• Ability to design scalable, secure, high-quality data solutions
• Strong analytical, problem-solving, and communication skills
• Ability to work independently, stay organized, and manage multiple priorities
• Experience collaborating with technical and business stakeholders
• Ability to mentor junior team members
• Experience with Palantir Foundry or similar enterprise data platforms
• Consulting or advisory background
• Familiarity with supplier, sourcing, or transportation data domains