

Bayforce
Senior Data Engineer – 12-Month Contract
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer on a 12-month remote contract (ET or CT time zones) offering a competitive pay rate. It requires 5+ years of Azure data engineering experience, expertise in Azure Data Factory and Synapse, and advanced PySpark/Spark skills.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
December 3, 2025
🕒 - Duration
More than 6 months
-
🏝️ - Location
Remote
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
United States
-
🧠 - Skills detailed
#Data Engineering #Logging #Data Catalog #BI (Business Intelligence) #ETL (Extract, Transform, Load) #Azure DevOps #Azure #Azure Data Factory #GitHub #Data Quality #Data Pipeline #API (Application Programming Interface) #Synapse #Dataflow #Microsoft Power BI #Documentation #Code Reviews #Spark (Apache Spark) #REST (Representational State Transfer) #Scala #SQL (Structured Query Language) #ADF (Azure Data Factory) #PySpark #Data Lake #Data Processing #Monitoring #Datasets #DevOps #Cloud
Role description
📍 Remote (ET or CT time zones)
🚫 Direct applicants only — we do not work with third parties or vendors.
We’re looking for a Senior Data Engineer who’s ready to lead cloud-native, Azure-based data engineering initiatives while shaping modern Lakehouse patterns across the organization. This role is ideal for someone who blends deep hands-on engineering skills with architectural thinking—and wants to influence platform standards, reusable patterns, and technical direction.
You’ll design and build end-to-end ingestion pipelines, optimize PySpark/Spark transformations, drive Microsoft Fabric adoption, and architect scalable Lakehouse solutions that support high-volume data processing across the enterprise.
What You’ll Do
• Design and deliver end-to-end data pipelines using Azure Data Factory, Synapse, and Microsoft Fabric
• Build high-performance PySpark/Spark transformations using best practices for partitioning, joins, file sizing, and incremental patterns (see the PySpark sketch following this list)
• Develop API-heavy ingestion frameworks, including REST/SOAP authentication, throttling, retries, and robust error handling (see the ingestion sketch following this list)
• Architect solutions across Azure Data Lake / OneLake, Lakehouse (Bronze/Silver/Gold), and warehouse layers
• Provide architectural input and help define platform standards, integration patterns, and target-state roadmaps
• Implement monitoring, logging, alerting, and runbooks, and support operational triage and root cause analysis (RCA)
• Apply governance, lineage, access controls, and data quality best practices
• Write complex SQL, design data models, and support downstream analytics with curated datasets
• Drive engineering excellence: reusable patterns, code reviews, documentation, CI/CD, and DevOps alignment
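As a rough illustration of the Spark practices above, here is a minimal PySpark sketch of an incremental Bronze-to-Silver transform. The storage paths, column names, and watermark handling are assumptions for the example, not details from this role.

```python
# Illustrative PySpark sketch of an incremental Bronze-to-Silver transform.
# Paths, column names, and the watermark value are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("silver_orders_incremental").getOrCreate()

last_watermark = "2025-01-01T00:00:00"  # would normally come from a control table

# Incremental pattern: read only rows newer than the last processed watermark.
orders = (
    spark.read.format("parquet")
    .load("abfss://bronze@datalake.dfs.core.windows.net/orders")
    .where(F.col("modified_ts") > F.to_timestamp(F.lit(last_watermark)))
)

# Small dimension table: broadcast it to avoid shuffling the large side of the join.
customers = spark.read.format("parquet").load(
    "abfss://bronze@datalake.dfs.core.windows.net/customers"
)

silver = (
    orders.join(F.broadcast(customers), "customer_id", "left")
    .withColumn("order_date", F.to_date("order_ts"))
)

# Partitioning and file sizing: repartition by the write key before a partitioned append.
(
    silver.repartition("order_date")
    .write.mode("append")
    .partitionBy("order_date")
    .format("parquet")
    .save("abfss://silver@datalake.dfs.core.windows.net/orders")
)
```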
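Similarly, here is a minimal sketch of a resilient REST ingestion loop with throttling-aware retries and cursor pagination. The endpoint, bearer-token auth, and response field names are hypothetical, not taken from this posting.

```python
# Illustrative sketch of a resilient REST ingestion loop.
# Endpoint, auth, and response fields are hypothetical.
import time
import requests

API_URL = "https://api.example.com/v1/records"  # hypothetical endpoint
TOKEN = "REPLACE_WITH_TOKEN"                    # e.g. retrieved from a secret store

def fetch_all(max_retries: int = 5, page_size: int = 500):
    """Yield records page by page with throttling-aware retries."""
    session = requests.Session()
    session.headers.update({"Authorization": f"Bearer {TOKEN}"})
    cursor = None
    while True:
        params = {"limit": page_size}
        if cursor:
            params["cursor"] = cursor
        for attempt in range(1, max_retries + 1):
            resp = session.get(API_URL, params=params, timeout=30)
            if resp.status_code == 429:  # throttled: honor Retry-After if present
                time.sleep(int(resp.headers.get("Retry-After", 2 ** attempt)))
                continue
            if resp.status_code >= 500:  # transient server error: exponential backoff
                time.sleep(2 ** attempt)
                continue
            resp.raise_for_status()      # fail fast on other client errors
            break
        else:
            raise RuntimeError("API call failed after retries")
        body = resp.json()
        yield from body.get("items", [])
        cursor = body.get("next_cursor")
        if not cursor:                   # no more pages
            return
```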
What You Bring
• 5+ years of data engineering with a strong focus on Azure
• Deep experience with Azure Data Factory, orchestration, parameterization, and production operations
• Hands-on with Synapse (pipelines, SQL pools and/or Spark)
• Advanced PySpark/Spark engineering and performance tuning
• Heavy experience with API-based ingestion, including pagination, retries, auth handling, and resiliency
• Strong SQL and data warehousing concepts: dimensional modeling, incremental loads, and data quality (DQ) validation (see the validation sketch following this list)
• Solid understanding of Data Lake, Lakehouse, and DW architectures
• Ability to troubleshoot, optimize, and support production-grade data pipelines
• Strong communication and cross-team collaboration skills
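As a rough illustration of the DQ validation point above, here is a minimal PySpark sketch of simple data-quality checks; the table path, key columns, and rules are assumptions for the example.

```python
# Minimal data-quality validation sketch in PySpark.
# Path, column names, and rules are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq_checks_orders").getOrCreate()

df = spark.read.format("parquet").load(
    "abfss://silver@datalake.dfs.core.windows.net/orders"  # hypothetical path
)

checks = {
    "non_empty": df.count() > 0,
    "no_null_keys": df.filter(F.col("order_id").isNull()).count() == 0,
    "no_duplicate_keys": df.count() == df.select("order_id").distinct().count(),
    "non_negative_amounts": df.filter(F.col("amount") < 0).count() == 0,
}

failed = [name for name, passed in checks.items() if not passed]
if failed:
    # In a pipeline this would raise an alert or fail the activity run.
    raise ValueError(f"Data quality checks failed: {failed}")
```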
Bonus Skills
• Experience with Microsoft Fabric (Lakehouse, Warehouse, OneLake, Dataflows, notebooks)
• Architecture experience contributing to patterns, design reviews, and governance
• Data engineering CI/CD experience using Azure DevOps or GitHub
• Knowledge of Power BI and semantic modeling for Lakehouse/Warehouse reporting
• Familiarity with data catalog and governance tools (e.g., Microsoft Purview)