

Golden Technology
Senior Data Engineer-C2H (W2)
Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer (C2H, W2) with a contract length of unspecified duration, offering a competitive pay rate. Remote work is available. Requires 3+ years in data development, SQL/NoSQL, Python, and experience with Databricks.
Country
United States
Currency
$ USD
Day rate
Unknown
Date
January 16, 2026
Duration
Unknown
Location
Remote
Contract
W2 Contractor
Security
Unknown
Location detailed
Cincinnati Metropolitan Area
Skills detailed
#Data Quality #Compliance #Mathematics #Databricks #Scrum #Data Pipeline #MIS Systems (Management Information Systems) #Documentation #Kafka (Apache Kafka) #Data Engineering #Docker #NoSQL #ETL (Extract, Transform, Load) #Python #Scala #Agile #PySpark #Data Ingestion #Kubernetes #SaaS (Software as a Service) #SQL (Structured Query Language) #Data Bricks #GIT #Spark (Apache Spark) #Automated Testing #Version Control #Leadership #GitHub #Computer Science #Datasets #Cloud #Snowflake
Role description
• This is a W2 C2H position | No third-party C2C vendors, please.
• We are currently seeking a Sr. Data Engineer for a direct-client remote contract role. Please see details below:
Job Description:
Requirements:
• Bachelor's degree, typically in Computer Science, Management Information Systems, Mathematics, Business Analytics, or another STEM field.
• Proven experience working within cross-functional teams and managing complex projects from inception to completion.
• 3+ years of professional data development experience.
• 3+ years of experience with SQL and NoSQL technologies.
• 2+ years of experience building and maintaining data pipelines and workflows.
• 2+ years of experience developing with Python and PySpark.
• Experience developing within Databricks.
• Experience with CI/CD pipelines and processes.
• Experience with automated unit, integration, and performance testing.
• Experience with version control software such as Git.
• Full understanding of ETL and data warehousing concepts.
• Strong understanding of Agile principles (Scrum).
Preferred Qualifications
• Experience with Snowflake.
• Experience in building out marketing cleanrooms.
• Knowledge of Structured Streaming (Spark, Kafka, EventHub, or similar technologies).
• Experience with GitHub SaaS/GitHub Actions.
• Experience with Service-Oriented Architecture.
• Experience with containerization technologies such as Docker and Kubernetes.
Responsibilities
Take ownership of systems, processes, and the tech stack while driving features to completion through all phases of the SDLC. This includes internal- and external-facing applications as well as process improvement activities:
• Provide Technical Leadership: Offer technical leadership to ensure clarity across ongoing projects and facilitate collaboration across teams to solve complex data engineering challenges.
• Build and Maintain Data Pipelines: Design, build, and maintain scalable, efficient, and reliable data pipelines to support data ingestion, transformation, and integration across diverse sources and destinations, using tools such as Kafka, Databricks, and similar toolsets (a minimal sketch of such a pipeline follows this list).
• Drive Innovation: Leverage innovative technologies and approaches to modernize and extend core data assets, including SQL-based, NoSQL-based, cloud-based, and real-time streaming data platforms.
• Implement Automated Testing: Design and implement automated unit, integration, and performance testing frameworks to ensure data quality, reliability, and compliance with organizational standards.
• Optimize Data Workflows: Optimize data workflows for performance, cost efficiency, and scalability across large datasets and complex environments.
• Mentor Team Members: Mentor team members in data principles, patterns, processes, and practices to promote best practices and improve team capabilities.
• Draft and Review Documentation: Draft and review architectural diagrams, interface specifications, and other design documents to ensure clear communication of data solutions and technical requirements.
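For illustration only (not part of the client's requirements): a minimal PySpark Structured Streaming sketch of the kind of ingestion pipeline described above, reading from Kafka and writing to a Delta table on Databricks. The broker address, topic, checkpoint path, and table name are hypothetical placeholders.

```python
# Minimal sketch, assuming a Databricks runtime (Spark 3.1+) with Kafka access.
# All connection details below are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("ingest-events").getOrCreate()

# Ingest: read a Kafka topic as a stream (Structured Streaming).
raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "events")                      # placeholder topic
    .load()
)

# Transform: decode the message payload and stamp ingestion time.
events = (
    raw.selectExpr("CAST(value AS STRING) AS payload", "timestamp AS event_time")
    .withColumn("ingested_at", F.current_timestamp())
)

# Load: stream into a Delta table, with checkpointing for fault tolerance.
query = (
    events.writeStream
    .format("delta")
    .option("checkpointLocation", "/tmp/checkpoints/events")  # placeholder path
    .toTable("bronze.events")                                  # placeholder table
)
query.awaitTermination()
```

In practice, a pipeline like this would be paired with schema enforcement and the automated unit, integration, and performance tests mentioned in the responsibilities above.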
Key emphases: operating independently, PySpark, Python, cleanroom experience, and Databricks experience.






