Senior Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is a Senior Data Engineer (Contract, 6–12 months) focused on Microsoft Fabric and Azure Cloud. Key skills include Azure Data Services, Databricks, Python, SQL, and Power BI. Previous consulting experience is preferred; Microsoft certifications are a plus. Remote work.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
-
🗓️ - Date discovered
August 17, 2025
🕒 - Project duration
More than 6 months
🏝️ - Location type
Unknown
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
United States
🧠 - Skills detailed
#Terraform #Datasets #Deployment #Data Modeling #DevOps #Data Engineering #Python #Azure Data Factory #Spark (Apache Spark) #Automation #Cloud #Synapse #Data Lineage #ADF (Azure Data Factory) #Microsoft Power BI #ML (Machine Learning) #Azure #ETL (Extract, Transform, Load) #Delta Lake #Consulting #Kafka (Apache Kafka) #Data Lake #Databricks #BI (Business Intelligence) #Azure Cloud #Scala #Data Quality #SQL (Structured Query Language) #Data Pipeline #Programming
Role description
Senior Data Engineer (Contract) – Microsoft Fabric & Azure Cloud
Location: Remote
Type: Contract (6–12 months, with potential extension)

About the Role
Are you passionate about building modern data platforms with cutting-edge cloud technology? We are looking for a Senior Data Engineer to accelerate our data transformation journey, leveraging Microsoft Fabric, Azure Data Services, Databricks, and Power BI. This hands-on consulting role is perfect for engineers who thrive in fast-paced environments and are driven to deliver enterprise-scale solutions.

Key Responsibilities
• Design & Develop Scalable Data Pipelines: Architect and build end-to-end data pipelines using Azure Data Factory, Synapse Analytics, Databricks, and Microsoft Fabric, following lakehouse/medallion architecture best practices.
• Modern Data Modeling & Lakehouse Management: Develop and optimize data models, Delta Lake structures, and curated datasets to support advanced analytics and reporting.
• Enable Advanced Analytics & BI: Collaborate with business users and analysts to deliver actionable insights through Power BI and self-service analytics solutions.
• Data Quality & Governance: Implement data quality checks, data lineage tracking, and governance frameworks to ensure reliable, trusted data.
• Performance & Cost Optimization: Tune Spark/Databricks jobs and Azure resources for optimal performance, scalability, and cost efficiency.
• Automation & DevOps: Build CI/CD pipelines and automate data deployments using Infrastructure-as-Code (ARM, Terraform, etc.).

What We're Looking For
• Extensive hands-on experience with Azure Data Services (ADF, Synapse, Data Lake, SQL) and Databricks.
• Deep expertise in Microsoft Fabric for pipeline development and orchestration.
• Strong programming skills in Python and SQL; Spark experience is a plus.
• Proven ability to implement lakehouse/medallion architecture and scalable ETL/ELT frameworks.
• Experience delivering business-ready analytics and dashboards using Power BI.
• Strong problem-solving skills and a collaborative, team-oriented mindset.
• Previous contract or consulting experience is highly preferred.

Bonus Points For
• Experience with real-time data streaming (Kafka, Event Hubs, Spark Streaming).
• Familiarity with MLOps or integration with machine learning workflows.
• Microsoft certifications (e.g., Azure Data Engineer, Fabric Data Engineer).

Why Join Us?
• Exciting Projects: Work on cutting-edge data modernization initiatives.
• Remote Flexibility: 100% remote, results-driven work culture.
• Innovative Tech Stack: Hands-on with the latest features of Microsoft Fabric, Azure, and Databricks.
• Direct Impact: Influence modern data practices and drive meaningful business outcomes.