

Azure Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for an Azure Data Engineer with a contract length of "X months" and a pay rate of "$X/hour". Required skills include Azure Data Factory, Databricks, SQL, and data governance. Candidates should have 6–10 years of experience, including 3+ years on Azure.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date discovered
August 8, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
Unknown
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
Washington, DC
🧠 - Skills detailed
#Data Quality #BI (Business Intelligence) #Datasets #ETL (Extract, Transform, Load) #Synapse #Classification #Data Lake #Security #Scala #Cloud #SQL (Structured Query Language) #Python #Data Management #Azure Data Factory #Azure Databricks #Batch #Compliance #Data Pipeline #ADF (Azure Data Factory) #Agile #PySpark #DevOps #Azure #Collibra #Data Catalog #Databricks #Informatica #Data Engineering #Microsoft Power BI #Documentation #Monitoring #Data Architecture #Tableau #Visualization #Spark (Apache Spark) #GitHub #Metadata #Azure DevOps #Data Governance #Azure SQL #Data Lineage #Logging
Role description
Job Description
We are looking for a highly skilled Azure Data Engineer to support enterprise-scale data management and analytics transformation initiatives. This role focuses on delivering data engineering pipelines and platform integrations, and on supporting advanced analytics, governance, and data products across Azure-based environments. The engineer will work closely with data architects, governance leads, and analytics teams to develop secure, scalable, and reusable data infrastructure.
You’ll be instrumental in developing integrated data pipelines, cloud-native services, and metadata management, and in supporting self-service data products using platforms such as Databricks, Microsoft Purview, Collibra, Informatica IDMC, and Power BI.
Key Responsibilities
• Develop ETL/ELT pipelines using Azure Data Factory, Databricks, and Synapse.
• Build data lake/warehouse architectures for structured and unstructured data.
• Perform data curation, transformation, validation, and enrichment at scale.
• Implement monitoring, logging, and data quality checks within data flows.
• Enable self-service analytics by curating certified datasets and exposing them securely.
• Deliver reusable, secure, and well-documented APIs for downstream applications.
• Collaborate with analytics teams to support real-time and batch analytics needs.
• Integrate data pipelines with visualization platforms such as Power BI or Tableau.
• Configure Microsoft Purview, Collibra, or Informatica IDMC for data lineage, classification, and policy enforcement.
• Enable metadata management, data catalogs, and automated lineage capture.
• Support access control mechanisms (RBAC) and sensitive data tagging.
• Collaborate with governance and compliance teams to implement regulatory controls.
• Provide platform operations support including pipeline performance optimization.
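To illustrate the kind of in-pipeline data quality checks the responsibilities above describe, here is a minimal plain-Python sketch; in practice this logic would run as a Databricks or Azure Data Factory pipeline step (e.g. over a PySpark DataFrame). All field and function names here (`validate_batch`, `order_id`, `amount`) are illustrative, not from the posting.

```python
# Minimal sketch of a data quality gate inside a batch pipeline step.
# Valid rows flow downstream; failing rows are quarantined with a reason,
# which supports the monitoring/logging and data quality checks listed above.

def validate_batch(records):
    """Split a batch of record dicts into valid and quarantined rows."""
    valid, quarantined = [], []
    for row in records:
        if not row.get("order_id"):
            quarantined.append({**row, "_reason": "missing order_id"})
        elif not isinstance(row.get("amount"), (int, float)) or row["amount"] < 0:
            quarantined.append({**row, "_reason": "invalid amount"})
        else:
            valid.append(row)
    return valid, quarantined

batch = [
    {"order_id": "A1", "amount": 19.99},
    {"order_id": "",   "amount": 5.00},   # fails: empty order_id
    {"order_id": "A3", "amount": -2.50},  # fails: negative amount
]
good, bad = validate_batch(batch)
print(len(good), len(bad))  # 1 valid row, 2 quarantined
```

Quarantining rather than dropping bad rows preserves an audit trail, which matters when the same pipelines must satisfy the governance and lineage requirements in this role.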
Qualifications
• 6–10 years of experience in data engineering, with 3+ years on Azure.
• Proven hands-on experience with Azure Data Factory, Azure Synapse, Azure SQL, and Databricks.
• Experience with metadata management tools like Microsoft Purview, Collibra, or Informatica IDMC.
• Strong knowledge of data governance, data catalogs, and compliance frameworks.
• Experience building scalable data lakes, warehouses, and lakehouses.
• Familiarity with DevOps practices and CI/CD pipelines using Azure DevOps or GitHub Actions.
• Proficiency in Python, SQL, Spark (PySpark or Scala).
• Understanding of security practices (data masking, RBAC, PII handling).
• Good communication and documentation skills in agile team environments.
Skills: Spark (PySpark or Scala), SQL, Problem Solving, Python, Data Governance, PySpark, Microsoft Purview, ETL, Databricks, Semantic Layers, ADF, Data Engineering, Collibra, Azure Synapse, Informatica, Agile, Azure Databricks, Azure DevOps, Informatica IDMC, Power BI, Azure Data Factory, GitHub Actions, CI/CD Pipelines, Azure SQL, Data Catalogs