

Test Yantra
Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer on a contract basis, requiring expertise in Databricks, Azure Data Factory, and Python. The position involves designing data pipelines, ensuring data quality, and mentoring team members. Key skills include ETL, documentation, and data architecture.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
Unknown
-
🗓️ - Date
February 19, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Warwick, England, United Kingdom
-
🧠 - Skills detailed
#Azure Data Factory #ETL (Extract, Transform, Load) #Scala #Data Architecture #Python #Documentation #Data Quality #Azure #Data Engineering #Data Pipeline #Monitoring #Databricks #ADF (Azure Data Factory)
Role description
• Build & Deliver: Design, develop, and maintain scalable data pipelines using Databricks, Azure Data Factory, and Python.
• Ensure Quality: Maintain data quality, consistency, and lineage across ingestion, transformation, and delivery layers.
• Orchestrate & Monitor: Implement orchestration, scheduling, and monitoring to ensure reliable data operations.
• Collaborate & Align: Work with Data Architects and Analysts to align pipelines with data models and target architecture.
• Troubleshoot & Optimise: Resolve data issues across development and production environments to maintain platform stability.
• Document & Share: Maintain clear technical documentation and contribute to shared engineering knowledge.
• Support & Mentor: Coach and support team members, helping to raise capability across the data engineering function.






