

Sira Consulting
Azure Databricks Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for an "Azure Databricks Data Engineer" based in Denver, CO, on a 6-month contract. Requires 6-10 years of data engineering experience, expertise in Azure Databricks and Delta Lake, and leadership skills in an onsite-offshore model.
Country
United States
Currency
$ USD
-
Day rate
Unknown
-
Date
May 12, 2026
Duration
More than 6 months
-
Location
On-site
-
Contract
Unknown
-
Security
Unknown
-
Location detailed
Denver, CO
-
Skills detailed
#Azure Databricks #ADF (Azure Data Factory) #Batch #Synapse #GIT #Data Processing #Spark SQL #Security #Spark (Apache Spark) #Data Engineering #Azure #Databricks #Azure Data Factory #Delta Lake #Code Reviews #Data Quality #ETL (Extract, Transform, Load) #Data Security #Data Access #PySpark #Data Ingestion #SQL (Structured Query Language)
Role description
We are currently looking for candidates based in Colorado, USA. Applicants from nearby states are also encouraged to apply.
Job Title: Azure Databricks Data Engineer
Location: Fully onsite in Denver, CO only
Job Type: Contract
Duration: 6 Months
Description:
Technical Onsite Lead
• Act as the onsite lead for Azure Databricks data engineering initiatives.
• Supervise and guide an offshore development team, assigning work, reviewing deliverables, and ensuring adherence to best practices.
• Perform code reviews, design reviews, and solution walkthroughs for offshore-developed components.
• Ensure alignment between offshore execution and onsite architecture, security, and business requirements.
• Proactively identify risks, gaps, and improvement opportunities across the data platform.
• Prior experience working in a hybrid onsite-offshore delivery model.
Technical Responsibilities
• Design, build, and orchestrate ETL/ELT pipelines using Azure Databricks.
• Implement batch data ingestion and transformations using PySpark and Spark SQL.
• Architect and maintain Lakehouse and analytical warehouse models (fact and dimension schemas) leveraging Delta Lake.
• Ensure data quality, reliability, lineage, and governance across the data platform.
• Collaborate with security and platform teams to enforce data access controls and best practices.
• Experience with CI/CD pipelines using Git for Databricks.
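As an illustration of the batch upsert pattern these responsibilities describe, here is a minimal sketch that builds a Delta Lake MERGE INTO (upsert) statement in Spark SQL. All table and column names (dim_customer, stg_customer, customer_id, etc.) are hypothetical; on Databricks, the generated statement would typically be executed with spark.sql(...) against Delta tables.

```python
# Sketch: generate a Delta Lake MERGE INTO (upsert) statement in Spark SQL.
# Table and column names are hypothetical examples, not from the job post.
# On Databricks, the returned string would be run via spark.sql(sql).

def build_merge_sql(target: str, source: str, key: str, cols: list[str]) -> str:
    """Return a Spark SQL MERGE statement that upserts `source` into `target`,
    matching rows on `key` and updating/inserting the listed columns."""
    set_clause = ", ".join(f"t.{c} = s.{c}" for c in cols)
    col_list = ", ".join([key] + cols)
    val_list = ", ".join(f"s.{c}" for c in [key] + cols)
    return (
        f"MERGE INTO {target} t\n"
        f"USING {source} s\n"
        f"ON t.{key} = s.{key}\n"
        f"WHEN MATCHED THEN UPDATE SET {set_clause}\n"
        f"WHEN NOT MATCHED THEN INSERT ({col_list}) VALUES ({val_list})"
    )

sql = build_merge_sql("dim_customer", "stg_customer", "customer_id", ["name", "city"])
print(sql)
```

This upsert-into-a-dimension-table pattern is one common way Delta Lake supports the fact/dimension maintenance mentioned above, since MERGE is an ACID operation on Delta tables.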
"Required Skills & Experience
β’ 6-10 years of hands on experience in data engineering.
β’ Strong, proven expertise in Azure Databricks and Delta Lake.
β’ Advanced proficiency in SQL with a solid understanding of distributed data processing concepts.
β’ Demonstrated experience leading, mentoring, or supervising data engineering teams.
β’ Strong experience performing code reviews and solution design validations.
β’ knowledge of data security, and access management (RBAC).
β’ Preferred / Nice to Have Skills: Experience with Azure Synapse / Fabric, Azure Data Factory. "
Skills: Databricks, PySpark, Azure Data Factory, PL/SQL






