

Sr. Databricks Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Sr. Databricks Engineer, offering a contract length of "unknown" and a pay rate of "unknown." Key skills required include Databricks, Delta Lake, Azure, and ETL/ELT pipeline development. A Bachelor's or Master's degree and 5+ years of data engineering experience are essential.
Country: United States
Currency: $ USD
Day rate: Unknown
Date discovered: July 12, 2025
Project duration: Unknown
Location type: Unknown
Contract type: Unknown
Security clearance: Unknown
Location detailed: United States
Skills detailed: #Azure Data Factory #Databricks #Strategy #GitHub #Data Quality #Data Modeling #ML (Machine Learning) #Spark (Apache Spark) #Version Control #MLflow #Computer Science #Terraform #Data Architecture #Scala #Jenkins #Delta Lake #Observability #Monitoring #Synapse #SQL (Structured Query Language) #Data Lineage #Data Lake #Infrastructure as Code (IaC) #PySpark #GCP (Google Cloud Platform) #Data Engineering #Data Catalog #Data Pipeline #Data Lakehouse #Data Security #DevOps #Azure DevOps #Airflow #AI (Artificial Intelligence) #Cloud #Security #AWS (Amazon Web Services) #Data Science #Logging #Azure #ETL (Extract, Transform, Load) #ADF (Azure Data Factory) #Migration #Automation #Data Encryption #Data Privacy
Role description
We are seeking a highly skilled and experienced Senior Databricks Engineer to join our Data Engineering team. This role is critical to designing, developing, and optimizing scalable data solutions using the Databricks Lakehouse platform. The ideal candidate possesses deep expertise in Databricks, Delta Lake, and cloud-based data architectures (Azure). Experience with ML and AI prompt engineering is a plus. This role will involve end-to-end solution design, including transforming business requirements into technical designs and implementation strategies for production.
Responsibilities include:
• Design, develop, and maintain scalable ETL/ELT pipelines using Databricks (PySpark, Scala, or SQL).
• Architect and optimize Delta Lake implementations to support reliable and performant data lakes.
• Implement and enforce data quality, governance, and observability best practices within Databricks.
• Collaborate with data scientists, analysts, and stakeholders to understand data needs and translate requirements into robust engineering solutions.
• Lead the design and implementation of data solutions, including end-to-end technical strategies for production.
• Monitor and optimize data workflows for cost efficiency and performance, especially in cloud environments (Azure, AWS, or GCP).
• Lead migration efforts from legacy data systems to Databricks and modern cloud-native architectures.
• Mentor junior engineers and establish best practices for code quality, version control, and CI/CD processes.
• Participate in data modeling and help enforce medallion architecture standards (bronze, silver, gold layers).
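As a rough illustration of the medallion standard named in the responsibilities above, a bronze → silver → gold flow in Databricks SQL might be sketched as follows. All catalog, schema, table, and column names here are hypothetical placeholders, not part of the role's actual environment.

```sql
-- Bronze: raw ingest into Delta, schema preserved, with an ingestion timestamp
CREATE TABLE IF NOT EXISTS lakehouse.bronze.orders_raw AS
SELECT *, current_timestamp() AS _ingested_at
FROM read_files('/Volumes/landing/orders/', format => 'json');

-- Silver: typed, deduplicated, validated records
CREATE TABLE IF NOT EXISTS lakehouse.silver.orders AS
SELECT DISTINCT
       CAST(order_id AS BIGINT)        AS order_id,
       CAST(amount   AS DECIMAL(12,2)) AS amount,
       to_date(order_ts)               AS order_date
FROM lakehouse.bronze.orders_raw
WHERE order_id IS NOT NULL;

-- Gold: business-level aggregate consumed by analysts and BI tools
CREATE TABLE IF NOT EXISTS lakehouse.gold.daily_revenue AS
SELECT order_date, SUM(amount) AS revenue
FROM lakehouse.silver.orders
GROUP BY order_date;
```

Each layer is a separate Delta table, so quality rules tighten as data moves from bronze to gold; the fragment assumes Unity Catalog three-level naming and a Databricks runtime.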
Desired Experience/Education:
• Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
• 5+ years of experience in data engineering, with at least 2+ years hands-on with Databricks.
• Expert-level proficiency in Spark (PySpark and/or Scala) and SQL.
• Deep experience with Delta Lake, data lakehouse architectures, and advanced performance optimization techniques (e.g., adaptive query execution, optimized writes, Z-ordering, caching strategies).
• Experience with data lineage and data catalog tools (e.g., Unity Catalog, Azure Purview).
• Proficiency in building and managing streaming data pipelines using Structured Streaming in Databricks.
• Strong knowledge of data encryption, masking, and data privacy best practices.
• Strong experience with orchestration tools (e.g., Airflow, Azure Data Factory, Databricks Workflows).
• Proficiency with monitoring and logging solutions.
• Proven experience in technical design, solution architecture, and implementation strategy.
• Strong knowledge of Azure Data Lake, Azure Data Factory, and Azure Synapse (or equivalents in AWS/GCP).
• Proficiency in CI/CD for data pipelines (e.g., GitHub Actions, Azure DevOps, Jenkins).
• Solid understanding of data security, identity management, and access control in the cloud.
• Familiarity with Infrastructure-as-Code (IaC) tools (e.g., Terraform, ARM templates, CloudFormation).
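The Delta Lake performance techniques listed above (optimized writes, Z-ordering) are typically applied with a few standard Databricks SQL commands. The table name below is a hypothetical example, not taken from the posting.

```sql
-- Enable optimized writes and auto-compaction on an existing Delta table
ALTER TABLE lakehouse.silver.orders
SET TBLPROPERTIES (
  'delta.autoOptimize.optimizeWrite' = 'true',
  'delta.autoOptimize.autoCompact'   = 'true'
);

-- Compact small files and co-locate rows by a frequently filtered column
OPTIMIZE lakehouse.silver.orders
ZORDER BY (order_date);

-- Remove data files no longer referenced by the table (default 7-day retention)
VACUUM lakehouse.silver.orders;
```

OPTIMIZE with ZORDER BY rewrites files so that queries filtering on `order_date` can skip unrelated files; it is usually scheduled periodically rather than run after every write.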
Other Desired Skills/Certifications:
• Experience with ML and AI prompt engineering.
• Experience implementing medallion architecture in production environments.
• Databricks certifications (e.g., Databricks Certified Data Engineer Professional).
• Familiarity with MLflow, Unity Catalog, and Databricks Delta Live Tables (DLT).
• Excellent problem-solving and troubleshooting skills.
• Strong communication skills, both written and verbal.
• Ability to work independently and lead cross-functional initiatives.
• Passion for automation, reusability, and performance optimization.