Brilliant®

Sr. Databricks SME

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Sr. Databricks SME, a 6-month remote contract position with a pay rate of $70-$85/hr. Requires 7+ years in Data Engineering, deep Databricks experience, and strong Azure skills. Client-facing communication is essential.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
680
-
🗓️ - Date
May 16, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
Remote
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Pittsburgh, PA
-
🧠 - Skills detailed
#Snowflake #Azure DevOps #Leadership #Apache Kafka #Cloud #GitHub #Spark (Apache Spark) #Azure cloud #Kafka (Apache Kafka) #Scala #Airflow #Delta Lake #Terraform #Apache Spark #ML (Machine Learning) #Databricks #Data Engineering #Strategy #Azure #Apache Airflow #Monitoring #Batch #dbt (data build tool) #DevOps #ETL (Extract, Transform, Load) #AI (Artificial Intelligence) #Consulting
Role description
• • • This role is not open to 3rd-party C2C candidates or sponsorship at this time • • •

Job Title: Sr. Databricks Engineer
Location: 100% remote, but candidates should ideally sit in one of these states: PA, GA, TX, FL, CO
Job Type: 6-month contract-to-hire
Pay Range: $70/hr - $85/hr; conversion salary in the $150-160k range. Potential flex on both.
Benefits: Healthcare, 401k

About the Opportunity
• Our client is a leading advisory and innovation consulting firm that helps organizations design and build modern data, cloud, and AI-enabled solutions. They partner with enterprise clients across complex and highly regulated industries to modernize operating models, build scalable data platforms, and accelerate digital transformation.
• They are seeking a Senior Databricks Engineer with deep experience designing and building enterprise-grade data platforms using Databricks in Azure environments. This is a highly hands-on role for someone who has built large-scale data solutions from the ground up and can confidently guide architecture decisions, delivery strategy, and engineering best practices in client-facing environments.
• This individual must be able to operate as a Databricks subject matter expert, balancing hands-on development with architecture leadership and direct client interaction.

Key Responsibilities
• Design and build enterprise-scale data platforms using Databricks on Azure.
• Architect and implement lakehouse solutions using Delta Lake, Unity Catalog, and Apache Spark.
• Build and optimize batch and streaming pipelines for high-volume enterprise data.
• Design medallion architecture environments from greenfield through production.
• Build scalable data models to support analytics, reporting, and downstream applications.
• Integrate Databricks with APIs, event streams, cloud warehouses, and ML workflows.
• Optimize Spark workloads for performance, reliability, and cost efficiency.
• Support CI/CD, testing, release management, and production operations.
• Partner directly with enterprise clients to translate requirements into scalable technical solutions.
• Mentor engineers and help establish reusable engineering patterns and standards.

Required Qualifications
• 7+ years in Data Engineering or Data Platform Engineering.
• Deep hands-on experience with Databricks in production enterprise environments.
• Strong Azure cloud experience.
• Strong experience with:
• Databricks
• Delta Lake
• Apache Spark
• Unity Catalog
• Batch and streaming pipeline design
• Performance tuning and cost optimization
• Strong client-facing communication skills and the ability to articulate architectural tradeoffs.

Desired Skills
Advanced experience with:
• Declarative Pipelines / Delta Live Tables
• Workflow orchestration
• Governance and access control
• Operational monitoring
• Lakehouse architecture best practices
Infrastructure-as-code experience using:
• Terraform
• Bicep
CI/CD experience using:
• GitHub
• Azure DevOps
Experience with complementary modern data tools such as:
• Snowflake
• dbt
• Apache Airflow
• Apache Kafka