

Jobs via Dice
Data Engineer- Local Consultants Only OH
Featured Role | Apply direct with Data Freelance Hub
This role is a Data Engineer position based in Cincinnati/Blue Ash, OH, on a contract basis. Pay rate is unspecified. Requires 7+ years of experience, expertise in Azure, Databricks, and SQL, along with relevant certifications. On-site work is mandatory.
Country
United States
Currency
$ USD
Day rate
Unknown
Date
October 8, 2025
Duration
Unknown
Location
On-site
Contract
Fixed Term
Security
Unknown
Location detailed
Cincinnati, OH
Skills detailed
#Azure Databricks #Data Management #Delta Lake #Microsoft Power BI #Cloud #Spark (Apache Spark) #Semantic Models #SQL (Structured Query Language) #Data Ingestion #ML (Machine Learning) #Data Mart #Datasets #Jira #Data Engineering #Data Transformations #ETL (Extract, Transform, Load) #Leadership #BI (Business Intelligence) #Data Pipeline #Databricks #Azure #Data Science #Data Architecture #Compliance #Synapse #AI (Artificial Intelligence) #Documentation #Scala #Data Processing #Metadata #Agile #Data Catalog #Alation #Data Governance
Role description
Dice is the leading career destination for tech experts at every stage of their careers. Our client, SSTech LLC, is seeking the following. Apply via Dice today!
Data Engineer
Job Description
Position Overview
This contract role will be part of Customer Experience, supporting the Data & Analytics team.
Candidates will need to be local to Cincinnati/Blue Ash and will be expected to work out of the BTD location (11511 Reed Hartman Hwy, Blue Ash, OH 45241) Monday through Thursday.
We are looking for a highly skilled, hands-on Senior Data Engineer to join our Data & Analytics team.
In this role, you will play a key part in building and scaling Kroger's behavioral and Ecommerce data platform, enabling trusted analytics and making AI-ready data available across the organization.
You will be responsible for designing and implementing robust, scalable data pipelines, modeling complex, high-volume datasets, and delivering high-quality, well-structured data products to power business insights and future AI capabilities.
This is a delivery-focused engineering position demanding deep technical expertise in Azure, Databricks, and Synapse, combined with experience managing large-scale behavioral data in enterprise environments.
Translating business needs into technical specifications and platform capabilities is a critical part of this role.
Required Qualifications
β’ 7+ years of experience in data engineering or similar roles, with hands-on delivery of cloud-based data solutions.
• Preferred certifications: Microsoft Certified: Azure Data Engineer Associate and Databricks Certified Data Engineer Professional.
β’ Strong expertise in Databricks and Azure Synapse, with practical experience in Spark-based data processing.
β’ Proficient in modern data architectures (Lakehouse, ELT/ETL pipelines, real-time data processing).
β’ Advanced SQL skills for data transformation and performance optimization.
β’ Proven ability to model and manage large-scale, complex behavioral and Ecommerce datasets.
β’ Expert in BI and best practices for data enablement and self-service analytics.
β’ Hands-on experience with Unity Catalog and data cataloging tools (e.g., Alation) for governance and metadata management.
β’ Working knowledge of behavioral analytics platforms (Adobe Analytics, Adobe Customer Journey Analytics).
β’ Excellent communication skills with a talent for translating technical concepts into business value.
β’ Experience operating in agile delivery environments, balancing speed, scalability, and solution quality.
• Proven leadership experience as a technical lead.
Key Responsibilities
β’ Design, build, and maintain robust, scalable ELT/ETL pipelines and data transformations using Databricks, Spark, and Synapse.
β’ Model high-volume, complex event-level datasets (digital behavior, Ecommerce transactions, marketing interactions) to support dashboards, experimentation, ML models, and marketing activation.
β’ Enforce data governance, discoverability, and stewardship using Unity Catalog and Alation, ensuring compliance and lineage tracking.
β’ Validate and reconcile data pipelines against established behavioral datasets such as Adobe Customer Journey Analytics (CJA) and Adobe Analytics.
β’ Partner with data architects, analysts, data scientists, and marketing teams to deliver trusted, reusable, and well-structured datasets that power BI dashboards and decision-making.
β’ Mature the data ingestion, processing, orchestration, and curation capabilities leveraging Delta Lake optimization, Databricks Workflows, and Synapse for analytical consumption.
β’ Support and optimize semantic models and data marts that enable self-service analytics through AI/BI Dashboards and Power BI.
β’ Participate in agile delivery processes (sprint planning, backlog refinement, documentation), collaborating through Jira and Confluence.
β’ Document data assets, transformations, and pipelines for discoverability, transparency, and long-term maintainability.
β’ Facilitate clear and continuous communication between business and engineering teams.
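To illustrate the validation-and-reconciliation responsibility above, here is a minimal sketch in plain Python. All names, data, and the 1% tolerance are illustrative assumptions; in practice this comparison would run over Databricks/Spark tables against an Adobe CJA or Adobe Analytics export, not in-memory dicts.

```python
# Hedged sketch: reconciling daily event counts from a pipeline output
# against a reference behavioral dataset (e.g., an Adobe CJA export).
# Function name, sample data, and tolerance are illustrative assumptions.

def reconcile(pipeline_counts: dict, reference_counts: dict,
              tolerance: float = 0.01) -> list:
    """Return (date, pipeline_count, reference_count) tuples whose
    counts diverge from the reference by more than the tolerance."""
    mismatches = []
    for date, ref in reference_counts.items():
        got = pipeline_counts.get(date, 0)
        # Use relative difference so the check scales with volume;
        # a zero reference count is always flagged for review.
        if ref == 0 or abs(got - ref) / ref > tolerance:
            mismatches.append((date, got, ref))
    return mismatches

pipeline = {"2025-10-06": 100_500, "2025-10-07": 98_000}
reference = {"2025-10-06": 100_000, "2025-10-07": 99_000}
print(reconcile(pipeline, reference))  # flags only 2025-10-07
```

A relative (rather than absolute) tolerance is the usual choice here, since event-level behavioral datasets vary widely in daily volume.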