

Cpl Life Sciences
Data Engineer – Content Analytics
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer – Content Analytics, offering £400/day for a hybrid position in London, inside IR35. Key skills include data pipeline development, SQL proficiency, and experience with AWS tools. A strong data modelling and ETL/ELT background is required.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
400
-
🗓️ - Date
April 17, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Hybrid
-
📄 - Contract
Inside IR35
-
🔒 - Security
Unknown
-
📍 - Location detailed
London Area, United Kingdom
-
🧠 - Skills detailed
#Storage #Data Processing #Datasets #ETL (Extract, Transform, Load) #Programming #Data Engineering #Scala #Lambda (AWS Lambda) #Version Control #Spark (Apache Spark) #S3 (Amazon Simple Storage Service) #Data Pipeline #Big Data #Java #SQL (Structured Query Language) #Databases #Redshift #AWS (Amazon Web Services) #Batch #Data Quality #Data Storage #Data Transformations #Python #Data Framework
Role description
Data Engineer – Content Analytics
£400/day | Hybrid – London
Inside IR35
We’re looking for an experienced Data Engineer to join a high-performing Content Analytics & Products team, supporting the development of data-driven products and insights across the full content lifecycle.
This role sits at the heart of a growing data function, where you’ll design and build scalable data pipelines, enabling both customer-facing features and internal analytics.
What you’ll be doing
• Designing, building and optimising scalable data pipelines (batch & streaming)
• Ingesting and transforming data from multiple sources into high-quality datasets
• Developing robust data models to support analytics and reporting
• Implementing big data solutions using tools such as Spark, EMR, Redshift and Kinesis
• Building and maintaining data platforms using infrastructure-as-code (AWS CDK)
• Creating automated data quality frameworks to ensure accuracy and reliability
• Writing high-performance SQL for complex data transformations
• Partnering with business and engineering teams to deliver impactful data solutions
• Improving performance and cost efficiency across data processing and storage
What we’re looking for
• Strong experience building and operating data pipelines at scale
• Solid background in data modelling, warehousing and ETL/ELT
• Proficiency in SQL and at least one programming language (Python, Java, Scala, etc.)
• Experience working with large, complex datasets in distributed environments
• Good understanding of software engineering best practices (CI/CD, testing, version control)
Nice to have
• Experience with AWS tools (Redshift, S3, Glue, EMR, Kinesis, Lambda)
• Exposure to Spark or other big data frameworks
• Experience with non-relational databases or modern data storage solutions
If you are interested, please apply or send your CV to luke.sandilands@cpl.com






