

Phaxis
Databricks Technical Lead/Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Databricks Technical Lead/Engineer on a 7-month contract, hybrid in Chicago, IL. Required skills include 5+ years of Databricks and Spark experience, expertise in Delta Lake, and knowledge of AWS services.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
December 18, 2025
🕒 - Duration
More than 6 months
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Chicago, IL
-
🧠 - Skills detailed
#Cloud #YAML (YAML Ain't Markup Language) #AWS (Amazon Web Services) #Airflow #Data Pipeline #Delta Lake #PySpark #Leadership #Metadata #Spark SQL #SQL (Structured Query Language) #Spark (Apache Spark) #Data Engineering #ETL (Extract, Transform, Load) #Databricks #IAM (Identity and Access Management) #JSON (JavaScript Object Notation) #S3 (Amazon Simple Storage Service)
Role description
Databricks Technical Lead - Chicago, IL
7-month contract with possible extension, hybrid 3 days onsite
We're looking for a Databricks Technical Lead who can guide the design and build-out of our data engineering and transformation platforms. This person will be the go-to expert on Databricks, Delta Lake, and Spark, and will help shape how data flows across our organization — from ingestion all the way through our curated layers.
This is a hands-on leadership role. You won't just review work — you'll help solve hard problems, mentor engineers, set standards, and make architecture decisions that will influence the platform for years.
What You'll Do
• Lead the technical direction for Databricks-based data pipelines and frameworks.
• Design and review patterns for ingesting, transforming, and publishing data (Bronze → Silver → Gold).
• Define best practices around Delta Lake, schema evolution, SCD handling, and metadata-driven transformations.
• Provide technical oversight across multiple engineering squads.
• Work with architects, data modelers, quality engineers, and operations teams to ensure pipelines are built the right way.
• Mentor data engineers and help elevate the overall engineering capability.
• Oversee Unity Catalog governance, including RBAC, lineage, and schema enforcement.
• Help troubleshoot complex performance issues and guide teams on tuning and optimization.
• Support integration with orchestration tools and CI/CD processes.
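To make the Bronze → Silver → Gold pattern above concrete, here is a minimal pure-Python sketch of a medallion flow. The field names and cleansing rules are hypothetical stand-ins; in this role the same stages would be built with PySpark DataFrames and Delta tables.

```python
# Minimal pure-Python sketch of a medallion (Bronze -> Silver -> Gold) flow.
# Field names and rules are hypothetical; a real pipeline would use
# PySpark DataFrames and Delta tables instead of lists of dicts.

def to_silver(bronze_rows):
    """Clean raw Bronze records: drop rows missing a key, normalize types."""
    silver = []
    for row in bronze_rows:
        if row.get("customer_id") is None:
            continue  # skip/quarantine malformed records
        silver.append({
            "customer_id": str(row["customer_id"]),
            "amount": float(row.get("amount", 0)),
        })
    return silver

def to_gold(silver_rows):
    """Aggregate cleaned records into a curated, consumption-ready view."""
    totals = {}
    for row in silver_rows:
        totals[row["customer_id"]] = totals.get(row["customer_id"], 0.0) + row["amount"]
    return totals

bronze = [
    {"customer_id": 1, "amount": "10.5"},
    {"customer_id": None, "amount": "3.0"},  # malformed: dropped at Silver
    {"customer_id": 1, "amount": 2.5},
]
print(to_gold(to_silver(bronze)))  # {'1': 13.0}
```

The point of the layering is that each stage has one responsibility: Bronze keeps raw data intact, Silver enforces quality, Gold serves consumers.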
What You Bring
• Several years of hands-on Spark experience and deep expertise with Databricks (Workflows, Delta Lake, Repos, Unity Catalog).
• Strong understanding of data engineering patterns, especially medallion architecture.
• Comfortable designing metadata-driven systems using YAML/JSON or similar configuration styles.
• Solid knowledge of AWS data services (S3, Glue, IAM, or equivalents).
• Experience with structured streaming and Auto Loader is a plus.
• Ability to lead and mentor engineers, give constructive feedback, and set engineering standards.
• Strong communication skills — able to explain complex ideas in a clear and approachable way.
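The metadata-driven style mentioned above means pipeline behavior lives in configuration rather than code. A rough sketch, using a hypothetical JSON config schema (the keys "rename" and "required" are illustrative, not a real framework):

```python
import json

# Hypothetical metadata-driven transformation: behavior is described in a
# JSON config rather than hard-coded. Keys like "rename" and "required"
# are illustrative only.
CONFIG = json.loads("""
{
  "table": "orders",
  "rename": {"ord_id": "order_id", "amt": "amount"},
  "required": ["order_id"]
}
""")

def apply_config(rows, config):
    """Rename columns and enforce required fields per the config."""
    out = []
    for row in rows:
        renamed = {config["rename"].get(k, k): v for k, v in row.items()}
        if all(renamed.get(col) is not None for col in config["required"]):
            out.append(renamed)
    return out

rows = [{"ord_id": 7, "amt": 19.99}, {"ord_id": None, "amt": 5.00}]
print(apply_config(rows, CONFIG))  # [{'order_id': 7, 'amount': 19.99}]
```

Adding a new source table then becomes a config change reviewed like data, not a code change per pipeline.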
Must-Have Skills
• 5+ years Databricks experience at a senior/lead level
• Strong Spark skills (PySpark + Spark SQL), not just SQL experience
• Deep understanding of Delta Lake (OPTIMIZE, VACUUM, compaction, file layout)
• Designed or owned a medallion architecture
• Experience with schema evolution & SCD Type 1/2 handling
• Hands-on Unity Catalog experience (permissions, lineage, governance)
• Built or maintained metadata-driven frameworks
• Streaming experience (Auto Loader / Structured Streaming)
• Experience with Airflow, Glue, or Databricks Workflows
• Working knowledge of cloud services (AWS)
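The SCD Type 2 handling in the list above can be sketched as follows. This is a plain-Python illustration with hypothetical column names ("valid_from", "is_current"); on Databricks the same logic is typically expressed as a Delta `MERGE`.

```python
# Illustrative sketch of SCD Type 2: instead of overwriting a changed
# record, close the current version and append a new one. Column names
# ("valid_from", "is_current") are hypothetical.

def scd2_upsert(history, key, new_attrs, as_of):
    """Close the current row for `key` (if changed) and open a new version."""
    for row in history:
        if row["key"] == key and row["is_current"]:
            if row["attrs"] == new_attrs:
                return history  # no change: keep current version open
            row["is_current"] = False
            row["valid_to"] = as_of
    history.append({
        "key": key, "attrs": new_attrs,
        "valid_from": as_of, "valid_to": None, "is_current": True,
    })
    return history

hist = []
scd2_upsert(hist, "cust-1", {"tier": "silver"}, "2025-01-01")
scd2_upsert(hist, "cust-1", {"tier": "gold"}, "2025-06-01")
print(len(hist))  # 2 versions: one closed, one current
```

Type 1, by contrast, would simply overwrite the attributes in place and keep no history.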