

PRACYVA
Senior Azure Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Azure Data Engineer with a contract length of "unknown," offering a pay rate of "unknown." Key skills include Azure Data Factory, Databricks, SparkSQL, and Python. Proven experience in metadata-driven architecture is required.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
💰 - Day rate
Unknown
🗓️ - Date
December 12, 2025
🕒 - Duration
Unknown
🏝️ - Location
Unknown
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
Salford, England, United Kingdom
🧠 - Skills detailed
#Pandas #GIT #Scala #Data Pipeline #Spark (Apache Spark) #Programming #DevOps #Metadata #Data Quality #Azure DevOps #Python #Databricks #Azure Data Factory #Compliance #ETL (Extract, Transform, Load) #Deployment #Leadership #Data Manipulation #Azure #ADF (Azure Data Factory) #Data Framework #Data Engineering #Data Processing
Role description
Job description:
Key Responsibilities
• Design, develop, and maintain metadata-driven data pipelines using ADF and Databricks.
• Build and implement end-to-end metadata frameworks, ensuring scalability and reusability.
• Optimize data workflows leveraging SparkSQL and Pandas for large-scale data processing.
• Collaborate with cross-functional teams to integrate data solutions into enterprise architecture.
• Implement CI/CD pipelines for automated deployment and testing of data solutions.
• Ensure data quality, governance, and compliance with organizational standards.
• Provide technical leadership and take complete ownership of assigned projects.
Technical Skills Required
• Azure Data Factory (ADF): Expertise in building and orchestrating data pipelines.
• Databricks: Hands-on experience with notebooks, clusters, and job scheduling.
• Pandas: Advanced data manipulation and transformation skills.
• SparkSQL: Strong knowledge of distributed data processing and query optimization.
• CI/CD: Experience with tools like Azure DevOps, Git, or similar for automated deployments.
• Metadata-driven architecture: Proven experience in designing and implementing metadata frameworks (an illustrative sketch follows this list).
• Programming: Proficiency in Python and/or Scala for data engineering tasks.
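As a rough illustration of the metadata-driven pattern this role centres on: instead of hard-coding each data feed, a control table holds one row per feed and a single generic pipeline loops over it. This is a minimal sketch only; the control table name (metadata_config) and its columns (source_format, source_path, transform_sql, write_mode, target_table) are hypothetical and not taken from the listing.

```python
# Minimal sketch of a metadata-driven pipeline in Databricks (PySpark).
# All table and column names below are illustrative assumptions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("metadata-driven-sketch").getOrCreate()

# One row per feed; in ADF the equivalent is usually a Lookup activity
# over the control table feeding a ForEach activity.
for row in spark.table("metadata_config").collect():
    # Load the source as declared in metadata, not hard-coded in the job.
    source_df = spark.read.format(row["source_format"]).load(row["source_path"])
    source_df.createOrReplaceTempView("staging")

    # Apply the feed-specific SparkSQL transform stored in the control table,
    # e.g. "SELECT * FROM staging WHERE amount > 0".
    transformed = spark.sql(row["transform_sql"])

    # Write to the metadata-declared target; onboarding a new feed then
    # means inserting a config row, not building a new pipeline.
    transformed.write.mode(row["write_mode"]).saveAsTable(row["target_table"])
```

This is the payoff of the pattern the listing asks for: the scalability and reusability come from the framework itself, with per-feed behaviour held as data rather than code.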





