Golden Technology

Senior Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer, a 6-month C2H position based in Cincinnati, OH (Remote), offering a competitive pay rate. Key skills include Azure Data Lake, Databricks, SQL, and experience in Supply Chain data strategy.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
800
-
🗓️ - Date
April 23, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
Remote
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Cincinnati Metropolitan Area
-
🧠 - Skills detailed
#Scala #SQL (Structured Query Language) #Spark (Apache Spark) #Data Catalog #PySpark #Microsoft Power BI #Data Engineering #Alation #BI (Business Intelligence) #GitHub #Data Lake #Agile #Azure #Cloud #Data Strategy #Scrum #Azure Databricks #Python #Databricks #Strategy #Data Mapping #Terraform #Migration
Role description
• This C2H position prefers W2 contractors.

Job Title: Senior Data Engineer
Location: Cincinnati, OH (Remote)
Type: 6-month C2H

Job Description
With sales of over $100B, delivering critical data and building cloud-first solutions is among the company's most critical capabilities. We're looking for individuals who can bring their core knowledge, and learn new tools, to provide new data and capabilities that support an ever-growing Supply Chain.
• Accountable for developing and delivering technological responses to targeted business outcomes.
• Analyze, design, and develop enterprise data and information architecture deliverables, focusing on data as an asset for Supply Chain and the overall enterprise.
• Understand and follow reusable standards, design patterns, guidelines, and configurations to deliver valuable data and information across the enterprise, including direct collaboration where needed.
• Demonstrate the company's core values of respect, honesty, integrity, diversity, inclusion, and safety.

Key Responsibilities
• Create and leverage Databricks notebooks to source, shape, and store data using SQL, Python, and PySpark
• Utilize enterprise standards for data domains and data solutions, focusing on simplified integration and streamlined operational and analytical uses
• Ensure there is clarity between ongoing projects, escalating when necessary, including direct collaboration
• Define high-level migration plans to address the gaps between the current and future state
• Analyze technology environments to detect critical deficiencies and recommend solutions for improvement
• Promote the reuse of data assets, including management of the data catalog for reference

Top Skills
• Azure Data Lake (Gen 2)
• Azure Databricks
• SQL
• Unity Catalog
• Alation
• Power BI
• Terraform
• GitHub Actions

Project
• Agile Scrum practices, joining a warehouse data engineering team (warehouse data domain)
• Implementing Manhattan, a 2-5 year project
• Data mapping to set up the data domain
• Supply Chain Data Strategy and Cloud Operations are core to the business and its operations