

Azure Data Engineer ($50/hr)
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for an Azure Data Engineer ($50/hr); the contract length is unknown, and the position requires 3–7 years of data engineering experience. Key skills include Databricks, Azure Data Factory, SQL, and PySpark. A degree in Engineering or Computer Science is required.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
400
-
🗓️ - Date discovered
August 28, 2025
🕒 - Project duration
Unknown
-
🏝️ - Location type
Unknown
-
📄 - Contract type
Unknown
-
🔒 - Security clearance
Unknown
-
📍 - Location detailed
Arlington, VA
-
🧠 - Skills detailed
#DevOps #Scala #Delta Lake #Unit Testing #Azure DevOps #Debugging #Azure Logic Apps #Azure Data Factory #Data Quality #ADF (Azure Data Factory) #Azure #Data Lake #SQL (Structured Query Language) #Synapse #Logic Apps #ETL (Extract, Transform, Load) #Spark (Apache Spark) #UAT (User Acceptance Testing) #PySpark #Databricks #Data Integration #Documentation #Data Modeling #Data Transformations #Integration Testing #Cloud #Data Engineering #Computer Science #Azure Function Apps #Data Pipeline #Python #Datasets
Role description
We are seeking a skilled Azure Data Engineer to design, build, and maintain scalable data solutions in our cloud environment. The ideal candidate will have hands-on expertise in Databricks, Azure Data Factory, and Synapse, with strong engineering and problem-solving skills. This role involves creating efficient data pipelines, optimizing performance, and supporting business-critical data platforms.
Key Responsibilities
• Design and develop new data pipelines leveraging existing frameworks and tools.
• Orchestrate and monitor pipelines using Azure Data Factory (ADF).
• Build and enhance data transformations with Databricks, PySpark, and SQL to load data into Enterprise Data Lake, Delta Lake, and Synapse Analytics (DWH).
• Implement unit testing, coordinate integration testing, and support UAT cycles.
• Prepare technical documentation, including high-level designs (HLDs), detailed designs, and runbooks for data pipelines.
• Configure compute environments, data quality (DQ) rules, and pipeline maintenance routines.
• Tune data workflows for performance and cost efficiency.
• Provide production support for data platforms and pipelines.
Required Skills & Experience
• 3–5 years (Associate) or 5–7 years (Senior) of hands-on data engineering experience.
• Strong knowledge of Databricks (Data Engineering, Delta Live Tables), Azure Data Factory, SQL, and PySpark.
• Experience with Azure Synapse (Dedicated SQL Pool), Azure DevOps, Python, Azure Function Apps, and Azure Logic Apps.
• Familiarity with Precisely (nice to have).
• Solid understanding of data modeling, data integration, and performance tuning.
• Strong problem-solving and debugging skills in a production environment.
• Degree in Engineering, Computer Science, or a related field.
Nice to Have
• Exposure to data quality and governance tools such as Precisely.
• Experience with enterprise-scale data platforms and large datasets.