Jobs via Dice

Lead Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Lead Data Engineer with 5+ years of experience, focusing on building ETL/ELT pipelines using Databricks, Spark, SQL, and AWS. Contract length is 6+ months, remote or hybrid in Atlanta; U.S. work authorization is required.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
May 5, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
United States
-
🧠 - Skills detailed
#ETL (Extract, Transform, Load) #Cloud #Data Science #Spark (Apache Spark) #Databricks #Data Engineering #AWS S3 (Amazon Simple Storage Service) #Batch #Spark SQL #Data Pipeline #NoSQL #IAM (Identity and Access Management) #S3 (Amazon Simple Storage Service) #Data Quality #Scala #Agile #ML (Machine Learning) #Datasets #AWS (Amazon Web Services) #Snowflake #SQL (Structured Query Language) #Kafka (Apache Kafka) #Delta Lake #BI (Business Intelligence)
Role description
Seeking a Lead Data Engineer to support a modern cloud data platform built on a lakehouse architecture. This role combines hands-on engineering with business collaboration: translating requirements into scalable data solutions and owning delivery from design through production support. You'll work cross-functionally to define data needs, design pipelines, and deliver clean, reliable datasets for analytics, reporting, and machine learning.

Responsibilities include ingesting and transforming structured and unstructured data, implementing data quality controls, and supporting governance standards. You'll also help optimize data models, ensure operational readiness, and promote engineering best practices. Key work includes building ETL/ELT pipelines, integrating systems using Databricks, Spark, SQL, and AWS, and enabling real-time and batch processing. You'll partner closely with BI and data science teams to deliver scalable solutions aligned with performance, cost, and maintainability goals.

Requirements:
• 5+ years in data engineering
• Strong Databricks experience (Jobs, Spark, Delta Lake, DLT, Unity Catalog)
• Solid SQL and Spark skills
• Experience with Kafka and AWS (S3, IAM, etc.)
• Background in building production data pipelines and working in Agile environments

Nice to have: Snowflake, NoSQL, or enterprise messaging tools.

Contract role (6+ months, renewable). Remote (EST hours) or hybrid in Atlanta. U.S. work authorization required.