

Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is a 6-month contract for a Data Engineer, offering remote work at an unspecified GBP day rate. It requires 3+ years in data engineering, strong SQL skills, and experience with tools such as dbt and BigQuery, with a key focus on modern data infrastructure.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
-
🗓️ - Date discovered
July 18, 2025
🕒 - Project duration
6 months
-
🏝️ - Location type
Remote
-
📄 - Contract type
Outside IR35
-
🔒 - Security clearance
Unknown
-
📍 - Location detailed
London Area, United Kingdom
-
🧠 - Skills detailed
#Schema Design #Data Architecture #BigQuery #Python #Data Quality #Fivetran #Scala #Monitoring #Looker #Azure SQL #Data Engineering #ETL (Extract, Transform, Load) #dbt (data build tool) #BI (Business Intelligence) #Azure #SQL (Structured Query Language)
Role description
Contract – Data Engineer – Outside IR35 – 6 months
We're working with one of the UK’s fastest-growing DTC subscription companies, known for its strong brand, digital-first mindset, and mission-driven approach. With tens of thousands of active subscribers and an ambitious growth roadmap, it is investing in its data function to support smarter decisions and sustainable scale.
The role
This is a hands-on contract Data Engineer role focused on shaping and delivering a modern data infrastructure. You’ll take ownership of data architecture, ingestion pipelines, modelling, and tooling to support real-time business insight and scalable operations.
The immediate priority is leading the transition from a legacy Azure SQL setup to a more robust and scalable data stack, giving you the chance to shape structure, tooling, and impact from the ground up.
Responsibilities:
Data Infrastructure & Modelling
• Build and maintain scalable data models and warehouse structures
• Apply best practice modelling techniques (dimensional models, SCDs)
• Collaborate with engineers to align schema design and data quality upstream
• Own ingestion and transformation workflows across core systems
• Implement robust testing and monitoring for pipeline reliability
• Recommend and integrate modern tooling (dbt, Fivetran, BigQuery, Fabric, Metabase)
• Support dashboarding and KPI tracking, and enable self-serve reporting across teams
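As context for the modelling techniques named above, a Type 2 slowly changing dimension (SCD) keeps every historical version of a row rather than overwriting it. The sketch below is purely illustrative (the record shape, field names like `valid_from`/`is_current`, and the `customer` example are hypothetical, not taken from the role); in practice this logic would typically live in a warehouse tool such as a dbt snapshot rather than application code.

```python
from datetime import date

def scd2_upsert(dimension, incoming, business_key, today=None):
    """Apply a minimal Type 2 SCD update to an in-memory dimension.

    `dimension` is a list of dicts carrying `valid_from`, `valid_to`, and
    `is_current` tracking columns; `incoming` is the latest source snapshot,
    keyed on `business_key`. Changed rows are closed out and a new current
    version is appended, preserving full history.
    """
    today = today or date.today()
    # Index the currently active version of each entity.
    current = {row[business_key]: row for row in dimension if row["is_current"]}

    def tracked(row):
        # Compare only business attributes, not the SCD bookkeeping columns.
        return {k: v for k, v in row.items()
                if k not in ("valid_from", "valid_to", "is_current")}

    for rec in incoming:
        old = current.get(rec[business_key])
        if old is not None and tracked(old) == rec:
            continue  # no attribute change: keep the existing current row
        if old is not None:
            old["valid_to"] = today      # close out the superseded version
            old["is_current"] = False
        dimension.append({**rec, "valid_from": today,
                          "valid_to": None, "is_current": True})
    return dimension

# Hypothetical usage: a subscriber upgrades plans, producing a second row.
dim = [{"customer_id": 1, "plan": "basic",
        "valid_from": date(2025, 1, 1), "valid_to": None, "is_current": True}]
scd2_upsert(dim, [{"customer_id": 1, "plan": "premium"}],
            "customer_id", today=date(2025, 7, 1))
```

After the call, the original "basic" row is closed (`valid_to` set, `is_current` False) and a new current "premium" row exists alongside it, so point-in-time queries remain possible.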
Requirements
• 3+ years in data engineering, analytics engineering, or BI development
• Strong SQL skills; Python is a plus
• Experience with modern data stack tools (dbt, Fivetran, BigQuery, Fabric, Looker, Metabase)
• Sound knowledge of data modelling concepts and warehousing best practices
• Comfortable owning end-to-end pipelines and operating autonomously
• Able to work across functions and translate data into business value
Why Apply?
• Join a high-growth, mission-driven business during a key scaling phase
• Own and influence the foundations of a modern data function
• Work remotely with flexible hours and quarterly meetups
• Make visible, high-impact contributions to product, growth, and operations