

Senior Data Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer with a contract length of "X months" and a pay rate of "$Y/hour." Located in Indianapolis, IN (Hybrid), it requires expertise in Databricks, Microsoft Fabric, Azure Cloud, and data governance. Certifications in Databricks and Azure Data Engineering are preferred.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: September 4, 2025
Project duration: Unknown
Location type: Hybrid
Contract type: Unknown
Security clearance: Unknown
Location detailed: Indianapolis, IN
Skills detailed: #BI (Business Intelligence) #AI (Artificial Intelligence) #Data Quality #Delta Lake #Python #SQL (Structured Query Language) #Azure DevOps #Compliance #Data Science #Spark SQL #GitHub #ML (Machine Learning) #Data Warehouse #Scala #API (Application Programming Interface) #Databricks #DevOps #Spark (Apache Spark) #PySpark #MLflow #GDPR (General Data Protection Regulation) #Observability #Microsoft Power BI #Security #Azure #Vault #ADLS (Azure Data Lake Storage) #JSON (JavaScript Object Notation) #Data Engineering #Cloud #Documentation #ETL (Extract, Transform, Load) #Terraform #Kafka (Apache Kafka) #Azure cloud
Role description
Position: Sr. Data Engineer – End-to-End Databricks & Microsoft Fabric Specialist
Location: Indianapolis, IN (Hybrid)
Contract
Role Objective
• Build, operate, and govern production-grade data and analytics solutions that span Databricks (Pipelines, Delta Live Tables, Genie, Agent Bricks) and Microsoft Fabric (Data Engineering, Lakehouse, Data Warehouse, Power BI).
• Deliver fast, reliable, and cost-optimized data flows while maintaining enterprise-grade security and observability.
Core Responsibilities
• Architecture & Design
o Design end-to-end ingestion, transformation, and serving layers across Databricks and Fabric.
o Define data model standards (star schema, CDC, semi-structured handling).
• Pipeline Development
o Implement CI/CD-ready pipelines using the Databricks Pipelines/Jobs API and Fabric pipelines (Spark SQL, notebooks).
o Enable real-time streaming (Event Hub/Kafka → Structured Streaming → Fabric Lakehouse); a minimal sketch follows this list.
• Data Quality & Governance
o Register assets in Unity Catalog and the Fabric Lakehouse catalog; enforce row-level security, data masking, and Purview lineage.
• Performance & Cost Optimization
o Tune Spark clusters; leverage Photon and Genie auto-tuning.
o Use Fabric's hot/cold tiers, materialized views, and auto-scale compute to keep spend under budget.
• Collaboration & Enablement
o Partner with data scientists, analysts, and product owners to translate business needs into reliable data solutions.
o Create reusable templates and documentation, and run knowledge-sharing sessions on Databricks & Fabric best practices.
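To make the streaming flow concrete, here is a minimal PySpark Structured Streaming sketch that reads a Kafka-compatible stream (Azure Event Hubs exposes a Kafka endpoint) and appends it to a Delta table. This is an illustrative sketch, not this team's actual pipeline: the broker address, topic, event schema, checkpoint path, and table name are all placeholders, and authentication options are omitted.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events-to-lakehouse").getOrCreate()

# Read from a Kafka-compatible broker (Event Hubs serves Kafka traffic on 9093);
# SASL authentication options are omitted for brevity.
raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "mynamespace.servicebus.windows.net:9093")  # placeholder
    .option("subscribe", "events")  # placeholder topic
    .load()
)

# Kafka delivers each payload as bytes; parse the JSON body into typed columns.
schema = "id STRING, ts TIMESTAMP, amount DOUBLE"  # illustrative event schema
events = (
    raw.select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
       .select("e.*")
)

# Append into a Delta table; the checkpoint lets the stream restart without
# duplicating or dropping records.
query = (
    events.writeStream
    .format("delta")
    .option("checkpointLocation", "/lakehouse/checkpoints/events")  # placeholder path
    .outputMode("append")
    .toTable("lakehouse.events")  # placeholder target table
)
```

The same pattern lands in a Fabric Lakehouse by pointing the writer at a Lakehouse-managed Delta table instead of the placeholder used here.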
Minimum Required Skills
• Databricks – 4+ years with Pipelines, Delta Live Tables, Genie, Agent Bricks; strong PySpark/Scala; Unity Catalog administration.
• Microsoft Fabric – 3+ years building Data Engineering, Lakehouse, and Data Warehouse pipelines; proficiency in Fabric notebooks (Spark SQL, Python).
• Azure Cloud – ADLS Gen2, Event Hub, Service Bus, Azure Functions, Key Vault, Azure DevOps/GitHub Actions, Terraform/ARM.
• Data Modelling – Star schema, CDC, handling JSON/Parquet/Avro.
• Governance & Security – Unity Catalog, Microsoft Purview, row-level security, GDPR/CCPA compliance (see the governance sketch after this list).
• CI/CD & Testing – Automated unit/integration/end-to-end tests; GitOps workflow.
• Observability – Azure Monitor, Log Analytics, dashboards for pipeline health.
• Soft Skills – Clear communication, stakeholder management, self-starter in a fast-moving team.
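As an illustration of the row-level security and masking requirements above, the sketch below attaches a Unity Catalog row filter and column mask from a Databricks notebook (where `spark` is provided by the runtime). The catalog, schema, table, and group names, and the filter predicate itself, are hypothetical.

```python
# Row filter: admins see every row; everyone else sees only the 'US' region.
spark.sql("""
    CREATE OR REPLACE FUNCTION main.governance.region_filter(region STRING)
    RETURN is_account_group_member('admins') OR region = 'US'
""")
spark.sql("""
    ALTER TABLE main.sales.orders
    SET ROW FILTER main.governance.region_filter ON (region)
""")

# Column mask: redact email addresses for anyone outside the 'pii_readers' group.
spark.sql("""
    CREATE OR REPLACE FUNCTION main.governance.mask_email(email STRING)
    RETURN CASE WHEN is_account_group_member('pii_readers') THEN email ELSE '***' END
""")
spark.sql("""
    ALTER TABLE main.sales.customers
    ALTER COLUMN email SET MASK main.governance.mask_email
""")
```

Because the filter and mask are attached to the tables themselves, they apply on every query path (SQL, notebooks, Power BI) rather than in a single report.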
Preferred / Nice to Have
• Databricks Certified Data Engineer (Associate/Professional).
• Microsoft Certified: Azure Data Engineer Associate.
• Experience with Genie AI-assisted pipeline generation and Fabric Copilot.
• Knowledge of Delta Lake Time Travel, Z-Ordering, and Fabric Direct Lake query optimizations (see the sketch below).
• Exposure to MLflow or Azure ML for model-serving pipelines.
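For the Delta Lake features named above, a short sketch, reusing the placeholder table from the streaming example:

```python
# Time Travel: read the table as it existed at an earlier version or timestamp.
v5 = spark.sql("SELECT * FROM lakehouse.events VERSION AS OF 5")
snapshot = spark.sql("SELECT * FROM lakehouse.events TIMESTAMP AS OF '2025-09-03'")

# Z-Ordering co-locates rows with similar key values so that file-level
# statistics can prune unrelated data at query time; run it as periodic
# table maintenance.
spark.sql("OPTIMIZE lakehouse.events ZORDER BY (id)")
```

Z-Ordering pays off on high-cardinality columns that appear in selective filters; the choice of `id` here is illustrative.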