Ascendum Solutions

Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer with 5+ years of experience in Azure, Databricks, Spark, and Python. The position is hybrid, offers competitive pay, and requires an in-person interview. Candidates must be based in the greater Cincinnati area.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date
January 17, 2026
🕒 - Duration
Unknown
🏝️ - Location
Hybrid
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
Cincinnati, OH
🧠 - Skills detailed
#Python #SQL (Structured Query Language) #Cloud #Version Control #Databricks #Spark (Apache Spark) #Data Security #Azure Databricks #Deployment #Data Strategy #Strategy #Infrastructure as Code (IaC) #Delta Lake #Monitoring #Scala #Terraform #Azure cloud #Data Catalog #Azure #Security #Data Governance #PySpark #DataOps #Distributed Computing #Automation #Data Pipeline #GitHub #GIT #Data Engineering
Role description
To be considered for this role:
• Candidates must be eligible to work for any employer in the United States without needing visa sponsorship now or in the future
• Candidates must be currently located in the greater Cincinnati area
• Candidates must be willing to interview in person and work hybrid on-site

Description
Seeking a Data Engineer experienced in implementing modern data solutions in Azure, with strong hands-on skills in Databricks, Spark, Python, and cloud-based DataOps practices. The Data Engineer will analyze, design, and develop data products, pipelines, and information architecture deliverables, treating data as an enterprise asset. This role also supports cloud infrastructure automation and CI/CD using Terraform, GitHub, and GitHub Actions to deliver scalable, reliable, and secure data solutions.

Requirements
• 5+ years of experience as a Data Engineer
• Hands-on experience with Azure Databricks, Spark, and Python
• Experience with Delta Live Tables (DLT) or Databricks SQL
• Strong SQL and database background
• Experience with Azure Functions, messaging services, or orchestration tools
• Familiarity with data governance, lineage, or cataloging tools (e.g., Purview, Unity Catalog)
• Experience monitoring and optimizing Databricks clusters and workflows
• Experience with Azure cloud data services and an understanding of how they integrate with Databricks and enterprise data platforms
• Experience with Terraform for cloud infrastructure provisioning
• Experience with GitHub and GitHub Actions for version control and CI/CD automation
• Strong understanding of distributed computing concepts (partitions, joins, shuffles, cluster behavior)
• Familiarity with the SDLC and modern engineering practices
• Ability to balance multiple priorities, work independently, and stay organized

Responsibilities
• Analyze, design, and develop enterprise data solutions with a focus on Azure, Databricks, Spark, Python, and SQL
• Develop, optimize, and maintain Spark/PySpark data pipelines, including managing performance issues such as data skew, partitioning, caching, and shuffle optimization (illustrated in the first sketch below)
• Build and support Delta Lake tables and data models for analytical and operational use cases
• Apply reusable design patterns, data standards, and architecture guidelines across the enterprise, including collaboration with the sister company when needed
• Use Terraform to provision and manage cloud and Databricks resources, supporting Infrastructure as Code (IaC) practices
• Implement and maintain CI/CD workflows using GitHub and GitHub Actions for source control, testing, and pipeline deployment (illustrated in the second sketch below)
• Manage Git-based workflows for Databricks notebooks, jobs, and data engineering artifacts
• Troubleshoot failures and improve reliability across Databricks jobs, clusters, and data pipelines
• Apply cloud computing skills to deploy fixes, upgrades, and enhancements in Azure environments
• Work closely with engineering teams to enhance tools, systems, development processes, and data security
• Participate in the development and communication of data strategy, standards, and roadmaps
• Draft architectural diagrams, interface specifications, and other design documents
• Promote the reuse of data assets and contribute to enterprise data catalog practices
• Deliver timely, effective support and communication to stakeholders and end users
• Mentor team members on data engineering principles, best practices, and emerging technologies
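To give candidates a concrete sense of the Spark tuning work named in the responsibilities, here is a minimal PySpark sketch, not the employer's actual code. All paths, table names, and columns (orders, stores, store_id, order_ts, amount) are hypothetical, and the Delta format is assumed to be available as it is on Databricks:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("orders-enrichment")  # hypothetical job name
    .getOrCreate()
)

# Large fact table and small dimension table (hypothetical Delta paths).
orders = spark.read.format("delta").load("/mnt/lake/bronze/orders")
stores = spark.read.format("delta").load("/mnt/lake/bronze/stores")

# Broadcasting the small dimension lets the join run map-side,
# avoiding a full shuffle of the large `orders` table.
enriched = orders.join(F.broadcast(stores), on="store_id", how="left")

# cache() pays off only when a result is reused by several downstream
# actions; it is shown here for illustration.
enriched.cache()

daily = (
    enriched
    .groupBy("store_id", F.to_date("order_ts").alias("order_date"))
    .agg(F.sum("amount").alias("daily_total"))
)

# Repartitioning on the write key controls task skew and output file sizes.
(
    daily.repartition("order_date")
    .write.format("delta")
    .mode("overwrite")
    .partitionBy("order_date")
    .save("/mnt/lake/silver/daily_store_totals")
)
```

The broadcast hint is the usual first lever against join skew; when both sides are large, salting the join key or relying on Spark's adaptive query execution are common alternatives.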
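And for the CI/CD responsibility, a hedged sketch of the kind of pipeline unit test a GitHub Actions workflow could run on every pull request. The transformation under test and its column names are hypothetical; the test runs on a local SparkSession, so no cluster is needed in CI:

```python
import pytest
from pyspark.sql import SparkSession
from pyspark.sql import functions as F


def add_daily_total(df):
    """Hypothetical transformation: total order amount per store per day."""
    return (
        df.groupBy("store_id", F.to_date("order_ts").alias("order_date"))
        .agg(F.sum("amount").alias("daily_total"))
    )


@pytest.fixture(scope="module")
def spark():
    # local[2] keeps the test self-contained and fast in a CI runner.
    session = (
        SparkSession.builder.master("local[2]").appName("ci-tests").getOrCreate()
    )
    yield session
    session.stop()


def test_add_daily_total(spark):
    df = spark.createDataFrame(
        [("s1", "2026-01-17 09:00:00", 10.0),
         ("s1", "2026-01-17 15:00:00", 5.0)],
        ["store_id", "order_ts", "amount"],
    )
    result = add_daily_total(df).collect()
    # Both orders fall on the same store and date, so one row totals both.
    assert len(result) == 1
    assert result[0]["daily_total"] == 15.0
```

Keeping transformations as plain functions, separate from notebook or job entry points, is what makes them testable this way in a CI pipeline before deployment.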