

Deloitte
Azure Databricks Cloud Contractor
Featured Role | Apply direct with Data Freelance Hub
This role is for an Azure Databricks Cloud Contractor on a contract of unspecified duration, offering $87-$93 per hour. It requires 7+ years in data engineering and strong skills in Apache Spark, Databricks, Python, SQL, and Azure data platforms. The work location is hybrid.
Country: United States
Currency: $ USD
Day rate: $744
Date: February 20, 2026
Duration: Unknown
Location: Hybrid
Contract: W2 Contractor
Security: Unknown
Location detailed: New York, United States
Skills detailed: #Spark (Apache Spark) #Programming #AWS (Amazon Web Services) #Azure #ADF (Azure Data Factory) #Airflow #IAM (Identity and Access Management) #Delta Lake #Vault #Data Quality #Azure ADLS (Azure Data Lake Storage) #Code Reviews #Scrum #ETL (Extract, Transform, Load) #Security #Scala #Azure Data Factory #Data Lake #Java #Azure Databricks #BI (Business Intelligence) #ML (Machine Learning) #Stories #GCP (Google Cloud Platform) #ADLS (Azure Data Lake Storage) #Storage #Snowflake #Observability #SQL (Structured Query Language) #ACID (Atomicity, Consistency, Isolation, Durability) #Monitoring #Python #Data Vault #Apache Spark #Databricks #Synapse #Azure Synapse Analytics #Batch #PySpark #Data Modeling #Cloud
Role description
Position Summary
Contractor
Hybrid
Work You'll Do
The services you will provide to the Deloitte Project Team as an Azure Databricks Contractor include working closely with business stakeholders, solution architects, product owners, scrum masters, and subject matter experts (SMEs) to understand requirements and deliver high-quality, scalable solutions on Azure Databricks.
Responsibilities Summary
• Solution design and delivery for Databricks-based platforms and products (batch/streaming, BI-ready models, ML workflows).
• Translate business requirements into reference architectures, implementation plans, and sprint-ready technical stories.
• Serve as technical decision-maker for tradeoffs (cost, performance, latency, reliability, maintainability).
• Design and implement Lakehouse patterns (Bronze/Silver/Gold), medallion architecture, and domain-oriented data products where applicable (see the sketch after this list).
• Build and optimize ETL/ELT using Apache Spark, Databricks SQL, and orchestrators (e.g., Databricks Workflows, ADF, Airflow).
• Implement streaming use cases (e.g., Spark Structured Streaming, Delta Live Tables where appropriate).
• Establish data modeling standards (star/snowflake, Data Vault where relevant) and performance tuning practices.
• Implement access controls, auditing, and governance with Unity Catalog (RBAC/ABAC patterns, lineage, data sharing policies).
• Ensure production readiness: CI/CD, monitoring/alerting, runbooks, incident response, and SLAs/SLOs.
• Drive data quality practices (tests, expectations, reconciliation, observability).
• Define MLOps standards (experiment tracking, reproducibility, champion/challenger, drift monitoring).
• Mentor engineers; conduct design reviews, code reviews, and set engineering standards.
• Partner with product owners, data owners, security, and platform teams; communicate status, risks, and options clearly.
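For orientation only, the sketch below illustrates the kind of Bronze-to-Silver medallion step referenced above, written in PySpark against Delta tables. It is not a Deloitte deliverable or standard, and the catalog, table, and column names (example_catalog, orders_raw, order_id, amount) are hypothetical.

# Illustrative only: a minimal Bronze -> Silver batch step on Delta tables.
# All catalog, table, and column names are hypothetical examples.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("bronze-to-silver-example").getOrCreate()
# On Databricks a `spark` session already exists; getOrCreate() simply returns it.

# Read the raw (Bronze) table as ingested, making no quality assumptions.
bronze = spark.read.table("example_catalog.bronze.orders_raw")

# Conform to the Silver layer: cast types, drop invalid rows, deduplicate on the key.
silver = (
    bronze
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
    .filter(F.col("order_id").isNotNull() & (F.col("amount") >= 0))
    .dropDuplicates(["order_id"])
)

# Simple data quality gate: fail loudly if too many rows were rejected.
total, kept = bronze.count(), silver.count()
if total > 0 and kept / total < 0.95:
    raise ValueError(f"Silver yield too low: kept {kept} of {total} rows")

# Persist as a Delta table; a production pipeline would usually MERGE or append
# incrementally rather than overwrite.
silver.write.format("delta").mode("overwrite").saveAsTable("example_catalog.silver.orders")

A Gold step would then typically aggregate such Silver tables into BI-ready models.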
Position Requirements
• 7+ years in data/platform/analytics engineering, including 2+ years leading technical teams or workstreams.
• Proven production delivery on Databricks (strong Azure Databricks experience preferred).
• Strong Apache Spark expertise (PySpark/Scala): distributed processing, troubleshooting, and performance tuning.
• Deep Delta Lake knowledge: ACID tables, compaction, Z-Ordering, schema evolution, and batch/streaming patterns.
• Experience building scalable batch and streaming pipelines, including orchestration/operationalization (scheduling, dependencies, retries, idempotency); see the sketch after this list.
• Strong Azure data platform background, including Azure Data Lake Storage (ADLS) architecture and best practices; familiarity with Azure Data Factory (ADF), Azure Synapse Analytics, and related services.
• Advanced programming skills in Python and SQL, plus hands-on Java experience (e.g., building integrations/services, Spark/streaming components, or platform utilities).
• Cloud fundamentals in at least one major platform (Azure, AWS, or Google Cloud Platform (GCP)): identity and access management (IAM), storage, networking basics, and cost controls.
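Again for illustration only (a hedged sketch, not a Deloitte standard): the snippet below shows the idempotency and Delta Lake maintenance ideas above, using Spark Structured Streaming with a foreachBatch MERGE keyed on a business identifier so replayed micro-batches do not duplicate rows. The paths, table name, and event_id/event_ts columns are hypothetical.

# Illustrative only: idempotent streaming upsert into a Delta table, plus routine
# maintenance. Paths, table names, and keys are hypothetical examples.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("streaming-upsert-example").getOrCreate()
TARGET = "example_catalog.silver.events"

def upsert_batch(batch_df, batch_id):
    """MERGE each micro-batch on the business key so replayed batches stay idempotent."""
    target = DeltaTable.forName(spark, TARGET)
    (
        target.alias("t")
        .merge(batch_df.dropDuplicates(["event_id"]).alias("s"), "t.event_id = s.event_id")
        .whenMatchedUpdateAll()
        .whenNotMatchedInsertAll()
        .execute()
    )

# Incrementally pick up new files; on Databricks, Auto Loader (cloudFiles) would be
# the usual source, but plain file streaming keeps this sketch portable.
events = (
    spark.readStream
    .format("json")
    .schema("event_id STRING, event_ts TIMESTAMP, payload STRING")
    .load("/mnt/example/raw/events/")
)

query = (
    events.writeStream
    .foreachBatch(upsert_batch)
    .option("checkpointLocation", "/mnt/example/chk/events/")  # retries resume safely
    .trigger(availableNow=True)  # process available data, then stop
    .start()
)
query.awaitTermination()

# Routine maintenance: compact small files and co-locate data on a common filter column.
spark.sql(f"OPTIMIZE {TARGET} ZORDER BY (event_ts)")

The checkpoint location plus the MERGE key is what makes retries safe; the OPTIMIZE/ZORDER step is routine compaction rather than part of the pipeline logic.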
The expected pay range for this contract assignment is $87.00-$93.00 per hour. The exact pay rate will vary based on skills, experience, and location and will be determined by the third party whose employees provide services to Deloitte.
Candidates interested in applying for this opportunity must be geographically based in the United States and must be legally authorized to work in the United States without the need for employer sponsorship.
We do not accept agency resumes and are not responsible for any fees related to unsolicited resumes.
Deloitte is not the employer for this role.
This work is contracted through a third party whose employees provide services to Deloitte.
#contract
Expected Work Schedule
During team's core business hours
Approximate hours per week
About Deloitte
Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It makes Deloitte one of the most rewarding places to work.
As used in this posting, "Deloitte" means , a subsidiary of Deloitte LLP. Please see www.deloitte.com/us/about for a detailed description of the legal structure of Deloitte LLP and its subsidiaries.
Requisition code: 323973





