Bayforce

Senior Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer on a 1-year contract, remote within ET or CT time zones. Requires 5+ years of Azure data engineering experience, strong PySpark, SQL, and API integration skills, plus a Bachelor's degree in a related field.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
πŸ—“οΈ - Date
December 4, 2025
🕒 - Duration
More than 6 months
-
🏝️ - Location
Remote
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
πŸ“ - Location detailed
United States
-
🧠 - Skills detailed
#Data Quality #Spark (Apache Spark) #Data Warehouse #SQL (Structured Query Language) #Data Catalog #Logging #Microsoft Power BI #Scala #Synapse #Data Pipeline #Data Lake #ADF (Azure Data Factory) #Compliance #Azure #Data Architecture #Azure DevOps #Monitoring #Azure Data Factory #Dataflow #Deployment #DevOps #ETL (Extract, Transform, Load) #Documentation #Datasets #PySpark #REST (Representational State Transfer) #Security #Azure cloud #Code Reviews #Data Integration #Cloud #Computer Science #API (Application Programming Interface) #BI (Business Intelligence) #Data Engineering #GitHub
Role description
Role Title: Senior Data Engineer
Employment Type: Contract
Duration: 1 year
Preferred Location: Remote, based in ET or CT time zones

Role Description:
The Senior Data Engineer will play a pivotal role in designing, architecting, and optimizing cloud-native data integration and Lakehouse solutions on Azure, with a strong emphasis on Microsoft Fabric adoption, PySpark/Spark-based transformations, and orchestrated pipelines. This role will lead end-to-end data engineering, from ingestion through APIs and Azure services to curated Lakehouse/warehouse layers, while ensuring scalable, secure, well-governed, and well-documented data products. The ideal candidate is hands-on in delivery and also brings data architecture knowledge to help shape patterns, standards, and solution designs.

Key Responsibilities
• Design and implement end-to-end data pipelines and ELT/ETL workflows using Azure Data Factory (ADF), Synapse, and Microsoft Fabric.
• Build and optimize PySpark/Spark transformations for large-scale processing, applying best practices for performance tuning (partitioning, joins, file sizing, incremental loads); an illustrative sketch appears after this description.
• Develop and maintain API-heavy ingestion patterns, including REST/SOAP integrations, authentication/authorization handling, throttling, retries, and robust error handling (see the ingestion sketch below).
• Architect scalable ingestion, transformation, and serving solutions using Azure Data Lake / OneLake, Lakehouse patterns (Bronze/Silver/Gold), and data warehouse modeling practices.
• Partner with stakeholders to define solution approaches and provide architecture input (data flow design, integration patterns, target-state roadmap, and platform best practices).
• Implement monitoring, logging, alerting, and operational runbooks for production pipelines; support incident triage and root-cause analysis.
• Apply governance and security practices across the lifecycle, including access controls, data quality checks, lineage, and compliance requirements (see the data quality sketch below).
• Write complex SQL, develop data models, and enable downstream consumption through analytics tools and curated datasets.
• Drive engineering standards: reusable patterns, code reviews, documentation, source control, and CI/CD practices.

Requirements
• Bachelor's degree (or equivalent experience) in Computer Science, Engineering, or a related field.
• 5+ years of experience in data engineering with a strong focus on Azure Cloud.
• Strong experience with Azure Data Factory pipelines, orchestration patterns, parameterization, and production support.
• Strong hands-on experience with Synapse (pipelines, SQL pools, and/or Spark) and modern cloud data platform patterns.
• Advanced PySpark/Spark experience for complex transformations and performance optimization.
• Heavy experience with API-based integrations (building ingestion frameworks; handling auth, pagination, retries, rate limits, and resiliency).
• Strong knowledge of SQL and data warehousing concepts (dimensional modeling, incremental processing, data quality validation).
• Strong understanding of cloud data architectures, including Data Lake, Lakehouse, and Data Warehouse patterns.
• Demonstrated ability to troubleshoot and optimize complex data workflows in production environments.
• Excellent communication and collaboration skills.

Preferred Skills
• Experience with Microsoft Fabric (Lakehouse/Warehouse/OneLake, Pipelines, Dataflows Gen2, notebooks).
• Architecture experience (formal or informal), such as contributing to solution designs, reference architectures, integration standards, and platform governance.
• Experience with DevOps/CI-CD for data engineering using Azure DevOps or GitHub (deployment patterns, code promotion, testing).
• Experience with Power BI and semantic model considerations for Lakehouse/warehouse-backed reporting.
• Familiarity with data catalog/governance tooling (e.g., Microsoft Purview).
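For illustration only (not part of the client's role description): a minimal PySpark sketch of the kind of incremental Bronze-to-Silver transformation and performance tuning the responsibilities above describe. The storage paths, table layout, column names, and watermark value are hypothetical.

```python
# Hypothetical sketch: paths, columns, and the watermark value are illustrative only.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("bronze_to_silver_orders").getOrCreate()

# Incremental load: pick up only records newer than the last processed watermark
# (in practice this would be read from a control table, not hard-coded).
last_watermark = "2025-12-01T00:00:00"
orders_bronze = (
    spark.read.format("delta")
    .load("abfss://lake@contoso.dfs.core.windows.net/bronze/orders")  # hypothetical path
    .filter(F.col("ingested_at") > F.lit(last_watermark))
)

# Small dimension table: broadcast it to avoid a shuffle-heavy join.
customers = spark.read.format("delta").load(
    "abfss://lake@contoso.dfs.core.windows.net/bronze/customers"
)
orders_silver = orders_bronze.join(F.broadcast(customers), "customer_id", "left")

# Control partition layout and file sizing on write.
(
    orders_silver
    .repartition("order_date")            # align shuffle output with the partition column
    .write.format("delta")
    .mode("append")
    .partitionBy("order_date")
    .save("abfss://lake@contoso.dfs.core.windows.net/silver/orders")
)
```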
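Similarly, a hedged sketch of an API ingestion loop with retries, pagination, and client-side throttling, using Python's requests library. The endpoint, auth header, and paging fields are hypothetical placeholders, not details from the posting.

```python
# Hypothetical sketch: endpoint, auth scheme, and paging parameters are illustrative only.
import time
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = requests.Session()
# Retry transient failures and honor Retry-After on 429/5xx responses.
retries = Retry(
    total=5,
    backoff_factor=2,
    status_forcelist=[429, 500, 502, 503, 504],
    respect_retry_after_header=True,
)
session.mount("https://", HTTPAdapter(max_retries=retries))
session.headers.update({"Authorization": "Bearer <token-from-key-vault>"})  # placeholder

def fetch_all(url: str, page_size: int = 500):
    """Page through a REST collection until the API reports no more results."""
    page, rows = 1, []
    while True:
        resp = session.get(url, params={"page": page, "pageSize": page_size}, timeout=30)
        resp.raise_for_status()
        payload = resp.json()
        rows.extend(payload.get("items", []))
        if not payload.get("hasMore"):
            return rows
        page += 1
        time.sleep(0.2)  # simple throttling between pages

records = fetch_all("https://api.example.com/v1/orders")  # hypothetical endpoint
```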
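Finally, a sketch of basic data quality gates followed by an incremental upsert into a curated (Gold) table, assuming a Delta/Fabric Lakehouse that supports MERGE. The table names, paths, and quality rules are hypothetical.

```python
# Hypothetical sketch: assumes a Delta-backed Lakehouse; names and rules are illustrative only.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
silver = spark.read.format("delta").load("/lakehouse/silver/orders")  # hypothetical path

# Basic data quality gates before promoting to the Gold layer.
null_keys = silver.filter(F.col("order_id").isNull()).count()
dupes = silver.groupBy("order_id").count().filter("count > 1").count()
if null_keys or dupes:
    raise ValueError(f"DQ check failed: {null_keys} null keys, {dupes} duplicate keys")

silver.createOrReplaceTempView("orders_silver")

# Incremental upsert into the curated table, keyed on the business key.
spark.sql("""
    MERGE INTO gold.fact_orders AS t
    USING orders_silver AS s
      ON t.order_id = s.order_id
    WHEN MATCHED THEN UPDATE SET *
    WHEN NOT MATCHED THEN INSERT *
""")
```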