

Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer based in Bristol, hybrid (1 day/week on-site), offering £600/day inside IR35. Key skills include strong Python and PySpark expertise, experience with Azure Databricks, and familiarity with ETL processes and data governance.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
💰 - Day rate
640
🗓️ - Date discovered
June 14, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
Hybrid
📄 - Contract type
Inside IR35
🔒 - Security clearance
Unknown
📍 - Location detailed
Greater Bristol Area, United Kingdom
🧠 - Skills detailed
#Spark (Apache Spark) #Data Pipeline #PySpark #BI (Business Intelligence) #Data Governance #Python #Data Modeling #Monitoring #Data Engineering #Databricks #Cloud #Azure Databricks #Data Quality #Azure #"ETL (Extract, Transform, Load)" #Scala #AI (Artificial Intelligence) #Data Catalog #Data Ingestion
Role description
🚀 Data Engineer
📍 Bristol – Hybrid, 1 day per week on site
💰 £600 (neg) per day, Inside IR35
🌟 What You’ll Be Doing
• Provide Expertise: Support the newly formed Analytics & Data Team in building tools and data processes.
• Build the Backbone: Design and maintain scalable, reusable data pipelines that power analytics and AI initiatives (illustrated in the first sketch after this list).
• Bridge IT & Data: Lead the integration of systems to unlock new platform capabilities and deliver innovative features.
• Operationalize AI: Deploy production-ready AI models with automated monitoring—from data ingestion to model outputs.
• Master the ELT: Focus on Extract & Load processes across diverse enterprise data sources.
• Keep It Flowing: Monitor workflows, define SLIs, and set up alerts to ensure seamless data operations (see the second sketch after this list).
• Champion Governance: Apply best practices in data governance, maintaining data catalogues and dictionaries.
• Set the Standard: Develop and promote Python coding standards and drive continuous improvements in data quality.
• Model for Impact: Use dimensional data modeling to create robust data structures that support business intelligence.
• Drive a Data Culture: Empower teams with evidence-based insights and foster a data-driven decision-making environment.
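For illustration only, not part of the client's brief: a minimal PySpark sketch of the kind of reusable pipeline the bullets above describe, assuming a Databricks workspace with Delta Lake available. All paths, table names, and columns are hypothetical placeholders.

```python
# Minimal extract-transform-load sketch (hypothetical names throughout).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_pipeline").getOrCreate()

# Extract: raw orders from a hypothetical landing zone.
raw = spark.read.json("/mnt/landing/orders/")

# Transform: de-duplicate, filter bad rows, derive a partition column.
clean = (
    raw.dropDuplicates(["order_id"])
    .filter(F.col("amount") > 0)
    .withColumn("order_date", F.to_date("order_ts"))
)

# Data quality gate: fail fast if the key field is missing.
null_ids = clean.filter(F.col("order_id").isNull()).count()
if null_ids > 0:
    raise ValueError(f"{null_ids} rows missing order_id")

# Load: partitioned Delta table for downstream BI and AI workloads.
(
    clean.write.format("delta")
    .mode("overwrite")
    .partitionBy("order_date")
    .saveAsTable("analytics.orders_clean")
)
```

On Databricks a pipeline like this would typically run as a scheduled job; the fail-fast check stands in for a fuller data quality framework.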
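Likewise hypothetical: a toy freshness SLI of the sort the "Keep It Flowing" bullet refers to, assuming an audit table that records a loaded_at timestamp per batch. A real setup would route a breach to an alerting channel rather than print it.

```python
# Freshness SLI sketch: how stale is the newest load? (hypothetical names)
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("freshness_sli").getOrCreate()

# Age of the most recent load in hours, computed in Spark so driver
# and cluster clocks or timezones cannot disagree.
age_hours = (
    spark.table("analytics.pipeline_audit")
    .agg(
        (
            (F.unix_timestamp(F.current_timestamp())
             - F.unix_timestamp(F.max("loaded_at"))) / 3600
        ).alias("age_hours")
    )
    .collect()[0]["age_hours"]
)

SLO_HOURS = 6  # example objective: data never more than 6 hours old

if age_hours > SLO_HOURS:
    print(f"ALERT: newest load is {age_hours:.1f}h old (SLO: {SLO_HOURS}h)")
else:
    print(f"OK: newest load is {age_hours:.1f}h old")
```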
What We’re Looking For
• Strong Python skills, especially with PySpark.
• Deep experience with Azure Databricks and cloud-based data platforms.
• A flexible, adaptive mindset and a collaborative spirit.
• Excellent communication skills—able to translate tech into plain English.
• A logical, analytical approach to problem-solving.
• Familiarity with the modern data stack and integration best practices.
• A passion for data quality, innovation, and continuous improvement.