Senior Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer, offering a 5-month contract at £450 - £500 per day, based in London with remote/hybrid options. Key skills include expertise in Snowflake, SQL, ETL/ELT pipelines, and AWS services.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
💰 - Day rate
£450 - £500
🗓️ - Date discovered
April 24, 2025
🕒 - Project duration
3 to 6 months
🏝️ - Location type
Hybrid
📄 - Contract type
Inside IR35
🔒 - Security clearance
Unknown
📍 - Location detailed
United Kingdom
🧠 - Skills detailed
#Lambda (AWS Lambda) #Data Orchestration #Scala #Data Pipeline #EDW (Enterprise Data Warehouse) #BI (Business Intelligence) #Data Quality #Tableau #Data Integration #Anomaly Detection #S3 (Amazon Simple Storage Service) #AWS (Amazon Web Services) #Cloud #AWS Glue #Data Governance #Airflow #Kafka (Apache Kafka) #Microsoft Power BI #Observability #SQL (Structured Query Language) #Snowflake #Clustering #Data Engineering #dbt (data build tool) #ETL (Extract, Transform, Load) #Indexing
Role description

Data Integration Specialist / Performance Data Engineer

Location: London / Open to both Remote & Hybrid

Rate: £450 - £500 per day (Inside IR35)

Contract: 5 months (potential to extend)

The Opportunity:

We're partnering with a globally recognised business to find a Senior Data Engineer with a passion for data performance, cloud architecture, and scalable integration.

This role sits at the heart of operational performance, enabling data-driven decisions through robust data pipelines, smart modelling, and seamless integrations. You'll play a critical part in modernising and scaling the data estate across multiple cloud-native tools and platforms.

What You'll Be Doing:

   • Build and optimise ETL/ELT pipelines across structured and unstructured data sources using Airbyte, Airflow, dbt Core, and AWS Glue

   • Design and maintain dimensional models in Snowflake, including slowly changing dimensions (SCDs) and best practices for indexing, clustering, and performance

   • Collaborate cross-functionally with analysts and business teams to support Power BI and enterprise-wide self-serve analytics

   • Implement best practices in data governance, including data quality checks, lineage tracking, and anomaly detection

   • Automate data orchestration using tools such as Airflow, Lambda, or Step Functions

   • Support financial and operational reporting through snapshot tables and audit-friendly data structures

What We're Looking For:

   • Strong understanding of Enterprise Data Warehousing (EDW) with hands-on experience in Kimball-style modelling

   • Expert-level SQL skills for complex transformation and query tuning

   • Deep knowledge of Snowflake including optimisation, cost management, and architecture

   • Experience with modern data stacks – especially dbt Core, Airbyte, and Airflow

   • Familiarity with AWS data services (e.g., S3, Lambda, Step Functions)

   • Proven ability to support scalable reporting frameworks and drive data reliability

Bonus Points For:

   • Experience with data observability and CI/CD pipelines for data engineering

   • Exposure to streaming platforms like Kafka or Kinesis

   • Comfort working in fast-moving, cross-functional environments

   • BI tool experience – Power BI, Tableau, or similar