
Freelance Senior Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Freelance Senior Data Engineer, lasting 3 months, located in Chancery Lane (hybrid: 3 days onsite, 2 days remote). Requires 7+ years in data engineering, expertise in GCP, Snowflake, and Databricks, and proficiency in Python and SQL.
Country
United Kingdom
Currency
£ GBP
Day rate
-
Date discovered
August 23, 2025
Project duration
3 to 6 months
Location type
Hybrid
Contract type
Unknown
Security clearance
Unknown
Location detailed
London Area, United Kingdom
Skills detailed
#Data Quality #Data Pipeline #Data Engineering #Snowflake #Azure DevOps #Web Scraping #Cloud #Azure #Data Analysis #PostgreSQL #SQL (Structured Query Language) #Leadership #DevOps #PySpark #Spark (Apache Spark) #React #ETL (Extract, Transform, Load) #AWS (Amazon Web Services) #REST API #GCP (Google Cloud Platform) #Python #Monitoring #Batch #Databricks #JavaScript #S3 (Amazon Simple Storage Service) #REST (Representational State Transfer) #GIT #Redshift #Lambda (AWS Lambda) #ML (Machine Learning) #Spark SQL #SQL Server #Data Modeling #Scala #API (Application Programming Interface) #Data Governance #Oracle
Role description
Senior Data Engineer
Start - ASAP
Duration - 3 months
Rates - TBC
Location - Chancery Lane
Hybrid - 3 days onsite / 2 days remote
We are seeking a proactive and self-motivated Senior Data Engineer with a proven track record of building scalable, cloud-based data solutions across multiple cloud platforms, to support our work architecting, building and maintaining the data infrastructure. The initial focus of this role will be GCP; however, experience with Snowflake and Databricks is also required.
As a senior member of the data engineering team, you will play a pivotal role in designing scalable data pipelines, optimising data workflows, and ensuring data availability and quality for production systems.
The ideal candidate brings deep technical expertise in AWS, GCP and/or Databricks, alongside essential hands-on experience building pipelines in Python, analysing data requirements with SQL, and applying modern data engineering practices. Your ability to work across business and technology functions, drive strategic initiatives, and solve problems independently will be key to success in this role.
Qualifications:
Experience:
β’ 7+ years of experience in data engineering and solution delivery, with a strong track record of technical leadership.
β’ Deep understanding of data modeling, data warehousing concepts, and distributed systems.
• Excellent problem-solving skills and the ability to independently design, build, and validate output data.
β’ Deep proficiency in Python (including PySpark), SQL, and cloud-based data engineering tools.
β’ Expertise in multiple cloud platforms (AWS, GCP, or Azure) and managing cloud-based data infrastructure.
β’ Strong background in database technologies (SQL Server, Redshift, PostgreSQL, Oracle).
Desirable Skills:
β’ Familiarity with machine learning pipelines and MLOps practices.
• Additional experience with Databricks and specific AWS services such as Glue, S3, and Lambda.
• Proficient in Git, CI/CD pipelines, and DevOps tools (e.g., Azure DevOps).
• Hands-on experience with web scraping, REST API integrations, and streaming data pipelines.
• Knowledge of JavaScript and front-end frameworks (e.g., React).
Key Responsibilities:
β’ Architect and maintain robust data pipelines (batch and streaming) integrating internal and external data sources (APIs, structured streaming, message queues etc.).
β’ Collaborate with data analysts, scientists, and software engineers to understand data needs and develop solutions.
• Gather requirements from operations and product teams to ensure data and reporting needs are met.
β’ Implement data quality checks, data governance practices, and monitoring systems to ensure reliable and trustworthy data.
β’ Optimize performance of ETL/ELT workflows and improve infrastructure scalability.