

Randstad Sourceright
Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer based at the Waltham Petcare Science Institute on a 12-month contract, paying £1,000 per day. Key skills include Azure Data Factory, Spark, and Python/PySpark; experience with life science data is preferred.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
1000
-
🗓️ - Date
May 8, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Waltham On The Wolds, England, United Kingdom
-
🧠 - Skills detailed
#Storage #ETL (Extract, Transform, Load) #Data Architecture #Data Processing #Programming #Data Integration #Data Storage #Spark (Apache Spark) #Big Data #Computer Science #PySpark #Data Pipeline #Azure #Documentation #Database Design #Python #ADF (Azure Data Factory) #Data Catalog #Data Engineering #Azure Data Factory #Databricks #Strategy
Role description
Job Title: Data Engineer
Location: Waltham Petcare Science Institute
Contract Length: 12 months
Working Hours: 40 hours per week
Randstad Sourceright, a leading provider of RPO & MSP Recruitment Services, is currently recruiting for a Data Engineer position. This role is essential for delivering projects in collaboration with IT partners and internal stakeholders to transform the product landscape and provide an infrastructure that supports the Petcare strategy of creating a better world for pets.
Job Purpose
The Data Engineer will design and implement robust data products and frameworks that meet evolving business needs. Working at the intersection of pet nutrition, health, and wellbeing, the successful candidate will bridge the gap between designed experiments and real-world data to support world-class science behind MARS Petcare brands.
Key Responsibilities
• Deliver data pipelines and products that support business operations and research.
• Provide clear documentation on technical specifications and maintain a deep understanding of solutions at the data and business process levels.
• Implement data engineering best practices to standardize the development process across the team.
• Design and maintain data integration frameworks for diverse sources, including practice management systems, lab systems, and registries.
• Design and implement data pipelines using Azure services (Azure Data Factory, Azure Data Storage), Spark, and Databricks.
• Lead data modelling, database design, and the design of enterprise data architecture.
• Utilize ETL/ELT frameworks and integration patterns with programming expertise in Python or PySpark.
• Process structured and semi-structured data, specifically focusing on life science data such as omics and health records.
• Ensure high-quality data is presented to researchers and maintain accurate data dictionaries within the data catalogue.
• Collaborate and communicate effectively with stakeholders and product managers to ensure project alignment.
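To illustrate the kind of ETL/ELT work these responsibilities describe, here is a minimal sketch in plain Python (the record fields and field names are hypothetical; in the role itself this would typically be built with PySpark on Databricks and orchestrated with Azure Data Factory):

```python
import csv
import io
import json

# Hypothetical semi-structured lab records; in practice these would
# arrive from practice management systems, lab systems, or registries.
raw_records = [
    '{"pet_id": "P001", "test": "glucose", "value": 5.4, "unit": "mmol/L"}',
    '{"pet_id": "P002", "test": "glucose", "value": 6.1, "unit": "mmol/L"}',
    '{"pet_id": "P003", "test": "glucose"}',  # incomplete record
]

def extract(lines):
    """Extract: parse each JSON line into a dict."""
    return [json.loads(line) for line in lines]

def transform(records):
    """Transform: drop incomplete records and keep a fixed schema."""
    return [
        {"pet_id": r["pet_id"], "test": r["test"], "value": r["value"]}
        for r in records
        if "value" in r
    ]

def load(records):
    """Load: serialise the cleaned records as CSV for downstream use."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["pet_id", "test", "value"])
    writer.writeheader()
    writer.writerows(records)
    return buf.getvalue()

clean = transform(extract(raw_records))
print(load(clean))
```

The same extract/transform/load split carries over directly to a distributed setting, where each stage would operate on Spark DataFrames rather than Python lists.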
Knowledge & Experience
The ideal candidate will possess:
• Educational Background: A degree in a relevant IT subject is required to ensure a foundational understanding of computer science principles.
• Azure Ecosystem Expertise: Proven expertise in designing and implementing data pipelines specifically using Azure Data Factory and Azure Data Storage.
• Big Data Processing: Advanced proficiency with Spark and Databricks for high-performance data processing.
• Programming Skills: Strong programming experience with Python or PySpark to build and manage ETL/ELT frameworks.
• Data Architecture & Design: Deep knowledge of enterprise data architecture, including database design and sophisticated data modelling.
• Integration Frameworks: Experience in designing data integration patterns that can pull from diverse sources like practice management systems and lab registries.
• Specialized Data Handling: Prior experience working with both structured and semi-structured data, with a strong preference for candidates who have handled life science data (e.g., omics or health records).
• Professional Track Record: A proven history of delivering data products and frameworks that directly meet complex business needs.
• Collaborative Skills: Ability to effectively communicate technical concepts to stakeholders and collaborate with product managers to align on business goals.