Sr Data Engineer (Azure) ($50/hr)

⭐ - Featured Role | Apply directly with Data Freelance Hub
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
400
-
🗓️ - Date discovered
September 9, 2025
🕒 - Project duration
Unknown
-
🏝️ - Location type
Unknown
-
📄 - Contract type
Unknown
-
🔒 - Security clearance
Unknown
-
📍 - Location detailed
Arlington, VA
-
🧠 - Skills detailed
#SQL (Structured Query Language) #Synapse #EDW (Enterprise Data Warehouse) #Azure #Documentation #Data Lake #Automation #Unit Testing #Data Warehouse #Spark (Apache Spark) #Scala #UAT (User Acceptance Testing) #Azure Data Factory #Data Modeling #Delta Lake #Python #Data Governance #Databricks #Azure cloud #Cloud #Data Quality #Data Pipeline #Azure Logic Apps #Logic Apps #ETL (Extract, Transform, Load) #Azure DevOps #Data Integration #Version Control #Integration Testing #PySpark #Azure Function Apps #Computer Science #ADF (Azure Data Factory) #Data Engineering #Data Transformations #DevOps
Role description
We are seeking a highly skilled Senior Data Engineer to design, build, and optimize large-scale data solutions on the Azure platform. The ideal candidate will have hands-on expertise with Databricks, PySpark, Azure Data Factory, Synapse, and other Azure cloud services, combined with strong SQL and Python skills. This role requires both engineering depth and a collaborative mindset to deliver robust, high-performing data pipelines and support enterprise-wide analytics.

Key Responsibilities
• Design and develop scalable data pipelines leveraging existing ingestion frameworks and tools.
• Orchestrate and monitor pipelines using Azure Data Factory (ADF) and Delta Live Tables (DLT); a minimal DLT sketch follows this listing.
• Build and enhance data transformations to parse, transform, and load data into the Enterprise Data Lake, Delta Lake, and Enterprise Data Warehouse (Synapse Analytics).
• Configure compute resources, define data quality (DQ) rules, and perform pipeline maintenance.
• Conduct unit testing, coordinate integration testing, and support UAT cycles (see the second sketch after this listing).
• Prepare and maintain documentation, including high-level design (HLD), detailed design (DD), and runbooks for data pipelines.
• Optimize and tune the performance of ETL processes and data pipelines.
• Provide production support for deployed data solutions, troubleshooting and resolving issues promptly.
• Collaborate with cross-functional teams and contribute to design and architecture discussions.

Must-Have Skills
• Hands-on experience with Databricks (Data Engineering), PySpark, and Delta Live Tables (DLT).
• Strong expertise with Azure Data Factory, Azure Synapse (Dedicated SQL Pool), and SQL.
• Proficiency in Python, with additional experience in Azure Function Apps and Azure Logic Apps.
• Experience with Azure DevOps for CI/CD, version control, and pipeline automation.
• Strong background in performance tuning and data pipeline optimization.
• Proven ability to create technical documentation (HLD, DD, runbooks).
• Solid understanding of data modeling, ETL design, and cloud-based data warehousing.

Good-to-Have Skills
• Experience with Precisely or similar data integration/quality tools.
• Exposure to enterprise-scale data governance practices.

Qualifications
• Bachelor's or Master's degree in Engineering, Computer Science, or a related field.
• 5–7 years of hands-on data engineering experience (3–5 years for Associate-level roles).
• Strong problem-solving skills with the ability to work independently and as part of a team.
• Excellent communication and collaboration skills, with experience working in enterprise environments.
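To make the DLT and data quality responsibilities concrete, here is a minimal sketch of a Delta Live Tables table definition with a DQ expectation, loading a cleansed table into the Delta Lake. All table and column names (raw_orders, order_id, order_ts) are hypothetical and not part of the posting.

```python
# Minimal Delta Live Tables sketch: read a raw table, apply a DQ rule,
# and publish a cleansed Delta table. Names are illustrative only.
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Cleansed orders for the Delta Lake silver layer.")
@dlt.expect_or_drop("valid_order_id", "order_id IS NOT NULL")  # DQ rule: drop rows missing the key
def silver_orders():
    return (
        dlt.read("raw_orders")                            # read the upstream raw/bronze table
        .withColumn("order_date", F.to_date("order_ts"))  # parse the event timestamp into a date
        .dropDuplicates(["order_id"])                     # basic de-duplication on the business key
    )
```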
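And a second sketch illustrating the unit-testing responsibility: a small PySpark transformation exercised with a local Spark session under pytest. The helper add_order_date and its columns are hypothetical examples, not code from the posting.

```python
# Minimal pytest sketch for a PySpark transformation; run with `pytest`.
from pyspark.sql import DataFrame, SparkSession, functions as F

def add_order_date(df: DataFrame) -> DataFrame:
    """Derive an order_date column from an order_ts timestamp string."""
    return df.withColumn("order_date", F.to_date("order_ts"))

def test_add_order_date():
    spark = SparkSession.builder.master("local[1]").appName("unit-test").getOrCreate()
    df = spark.createDataFrame([("1", "2025-09-09 12:00:00")], ["order_id", "order_ts"])
    result = add_order_date(df).first()
    assert str(result["order_date"]) == "2025-09-09"  # to_date drops the time component
    spark.stop()
```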