

Randstad Digital
Senior Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer in London (Hybrid – 3 days/week) on an initial 6-month contract. Key skills include Python, SQL, and Big Data technologies. Requires 7+ years of experience and a STEM or Business degree. Rate TBD.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
💰 - Day rate
Unknown
🗓️ - Date
March 6, 2026
🕒 - Duration
More than 6 months
🏝️ - Location
Hybrid
📄 - Contract
Fixed Term
🔒 - Security
Unknown
📍 - Location detailed
London Area, United Kingdom
🧠 - Skills detailed
#PySpark #Databases #ETL (Extract, Transform, Load) #MongoDB #Docker #Hadoop #Python #SQL (Structured Query Language) #Kafka (Apache Kafka) #Cloud #Security #Airflow #Statistics #Azure #Scala #Data Engineering #Java #Luigi #AWS (Amazon Web Services) #BI (Business Intelligence) #R #Tableau #SSIS (SQL Server Integration Services) #Databricks #Data Quality #Computer Science #Spark (Apache Spark) #Informatica #Linux #Neo4J #Big Data #GCP (Google Cloud Platform) #Bash #NoSQL
Role description
Data Engineer (Innovation Group)
Location: London (Hybrid – 3 days per week)
Duration: 6-Month Initial Contract
Rate: TBD
Are you a Data Engineer who thrives on building more than just standard ETL pipelines? Our Innovation Group is looking for a technical problem-solver to design, build, and codify the rules that power our next-generation data models.
The Role
You will own the end-to-end lifecycle of data, from the initial design of structured and unstructured models to the engineering of scalable pipelines. You will codify complex business rules and continuously evaluate emerging technologies to enhance the group’s capabilities.
Key Responsibilities
• Design & Build: Develop sophisticated data models and map various data sources to support downstream analytics.
• Pipeline Engineering: Build scalable ETL pipelines and data quality solutions using tools like Airflow, Luigi, or Azkaban (a brief sketch follows this list).
• Codification: Translate intricate business logic into high-quality code using Python, Scala, or Java.
• Innovation: Evaluate and implement new technologies across Cloud (Azure, AWS, GCP) and Big Data (Spark, Kafka) stacks.
• Collaboration: Work closely with business teams to define data flows and ensure security protocols (encryption/authentication) are met.
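To give a flavour of the orchestration work above, here is a minimal sketch of a daily ETL job using Airflow's TaskFlow API. It assumes Airflow 2.4+ and Python 3.9+, and the DAG, task, and field names (orders_etl, extract, amount) are hypothetical placeholders rather than anything specific to this role:

```python
# Minimal Airflow TaskFlow sketch (assumes Airflow 2.4+; all names are hypothetical)
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2026, 1, 1), catchup=False)
def orders_etl():
    @task
    def extract() -> list[dict]:
        # Placeholder for pulling rows from a source system
        return [{"order_id": 1, "amount": 42.0}, {"order_id": 2, "amount": -5.0}]

    @task
    def transform(rows: list[dict]) -> list[dict]:
        # Codify a business rule: drop orders with non-positive amounts
        return [r for r in rows if r["amount"] > 0]

    @task
    def load(rows: list[dict]) -> None:
        # Placeholder for writing to the warehouse
        print(f"Loaded {len(rows)} rows")

    # Wire the dependency chain: extract -> transform -> load
    load(transform(extract()))


orders_etl()
```

Luigi or Azkaban would express the same extract → transform → load dependency chain through their own task and job abstractions; the pattern, not the tool, is the point.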
Technical Toolkit
• Languages: Proficiency in Python, R, or Java/Scala.
• Databases: Experience across Relational (SQL), NoSQL (MongoDB/Cassandra), and Graph (Neo4j).
• Big Data: Hands-on experience with Hadoop or Spark (see the PySpark sketch after this list).
• Infrastructure: Comfortable with Linux/Bash, Docker, and Cloud IaaS/PaaS environments.
• Integrations: Familiarity with BI tools (Power BI/Tableau) and integration platforms (SSIS/Informatica/SnapLogic).
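As an illustration of pairing the Spark and data quality skills above, here is a minimal PySpark sketch of a rule-based quality gate; the column names and rules are hypothetical examples, not requirements from this posting:

```python
# Minimal PySpark data-quality gate (hypothetical columns and rules)
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-check").getOrCreate()

# Toy stand-in for a real source table
df = spark.createDataFrame(
    [(1, "GBP", 120.0), (2, "GBP", None), (3, "", 75.5)],
    ["order_id", "currency", "amount"],
)

# Flag rows violating simple business rules: amount present, currency non-empty
violations = df.filter(F.col("amount").isNull() | (F.col("currency") == ""))
print(f"{violations.count()} of {df.count()} rows failed the quality rules")
```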
What You’ll Need to Succeed
We are looking for a data professional who bridges the gap between complex business logic and high-performance engineering. Ideally, you bring 7+ years of experience in Data Engineering or Systems Analysis, backed by a Bachelor’s degree in a STEM or Business field, such as Computer Science, Finance, or Statistics. If you have a track record of building scalable pipelines and a passion for evaluating the next generation of data tech, we want to hear from you.
#DataEngineering #BigData #DataModeling #CloudComputing #InnovationJobs #Python #PySpark #Azure #AWS #SQL #Databricks #ETL #LondonJobs #HybridWorking #DataJobsLondon #TechHiringUK






