

Response Informatics
Senior Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer on a contract of unspecified length, offering a competitive pay rate. Key skills include Azure Data Factory, Databricks, SparkSQL, and CI/CD; proven experience in metadata-driven architecture and proficiency in Python or Scala are required.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
Unknown
-
🗓️ - Date
December 19, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Salford, England, United Kingdom
-
🧠 - Skills detailed
#Scala #Data Pipeline #Leadership #ADF (Azure Data Factory) #DevOps #Data Processing #Spark (Apache Spark) #ETL (Extract, Transform, Load) #Deployment #Metadata #Databricks #Data Engineering #Azure DevOps #Python #Data Manipulation #Programming #Azure Data Factory #Data Quality #GIT #Data Framework #Pandas #Compliance #Azure
Role description
Job description:
Key Responsibilities
• Design, develop, and maintain metadata-driven data pipelines using ADF and Databricks (see the sketch after this list).
• Build and implement end-to-end metadata frameworks, ensuring scalability and reusability.
• Optimize data workflows leveraging SparkSQL and Pandas for large-scale data processing.
• Collaborate with cross-functional teams to integrate data solutions into enterprise architecture.
• Implement CI/CD pipelines for automated deployment and testing of data solutions.
• Ensure data quality, governance, and compliance with organizational standards.
• Provide technical leadership and take complete ownership of assigned projects.
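For illustration, here is a minimal sketch of what a metadata-driven ingestion loop can look like in a Databricks notebook. The control table name (meta.pipeline_control) and its columns (source_path, file_format, target_table) are assumptions for the example, not names taken from this posting:

```python
# A minimal sketch of a metadata-driven ingestion loop, as it might run in a
# Databricks notebook. The control table (meta.pipeline_control) and its
# columns (source_path, file_format, target_table) are illustrative
# assumptions, not part of the job posting.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Each row of the control table describes one load; new sources are
# onboarded by adding a row, not by writing new pipeline code.
for row in spark.table("meta.pipeline_control").collect():
    df = spark.read.format(row["file_format"]).load(row["source_path"])
    df.write.mode("overwrite").saveAsTable(row["target_table"])
```

In practice the same pattern is often orchestrated from ADF, which looks up the metadata rows and passes each one to the notebook as job parameters rather than looping inside it.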
Technical Skills Required
• Azure Data Factory (ADF): Expertise in building and orchestrating data pipelines.
• Databricks: Hands-on experience with notebooks, clusters, and job scheduling.
• Pandas: Advanced data manipulation and transformation skills.
• SparkSQL: Strong knowledge of distributed data processing and query optimization (see the sketch after this list).
• CI/CD: Experience with tools like Azure DevOps, Git, or similar for automated deployments.
• Metadata-driven architecture: Proven experience in designing and implementing metadata frameworks.
• Programming: Proficiency in Python and/or Scala for data engineering tasks.
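As a concrete illustration of how the SparkSQL and Pandas items above typically combine, here is a minimal sketch of the usual division of labour: aggregate at scale in SparkSQL, then hand the small result to Pandas. The table and column names (raw.events, event_date) are hypothetical:

```python
# A minimal sketch of the SparkSQL-to-Pandas handoff. Table and column names
# (raw.events, event_date) are hypothetical examples.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Do the heavy, distributed aggregation in SparkSQL on the cluster.
daily = spark.sql("""
    SELECT event_date, COUNT(*) AS events
    FROM raw.events
    GROUP BY event_date
""")

# toPandas() is safe here only because the aggregated result is small;
# calling it on raw.events directly would pull the full dataset to the driver.
pdf = daily.toPandas().sort_values("event_date")
pdf["events_7d_avg"] = pdf["events"].rolling(7).mean()
```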