Sparta Global

Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is a Senior Data Engineer for a 12-month contract, requiring 3 days on-site in Blackpool. Key skills include mentoring, Python or Scala, advanced SQL, ETL/ELT experience, and familiarity with data warehouse platforms like Snowflake or BigQuery.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
Unknown
-
πŸ—“οΈ - Date
May 2, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
On-site
-
📄 - Contract
Fixed Term
-
🔒 - Security
Unknown
-
πŸ“ - Location detailed
Blackpool, England, United Kingdom
-
🧠 - Skills detailed
#Deployment #Storage #Scala #Security #Snowflake #Data Integrity #Data Engineering #Python #Data Pipeline #Leadership #SQL (Structured Query Language) #Database Design #BI (Business Intelligence) #Redshift #Datasets #Data Warehouse #BigQuery #Airflow #Data Processing #Data Architecture #Batch #ETL (Extract, Transform, Load)
Role description
Sparta Global is hiring a Senior Data Engineer. The role combines hands-on engineering with technical leadership, including mentoring junior engineers and driving best practice across the organisation's data engineering function. This is a 12-month Fixed Term Contract, requiring 3 days a week on-site in Blackpool.

Role Summary:
You will design, build, and optimise scalable data pipelines that support analytics, reporting, and operational use cases. You will play a key role in shaping the organisation's data architecture, ensuring data is reliable, accessible, and performant across multiple systems, while helping to manage a team of junior data engineers.

Key Responsibilities:
• Mentor junior data engineers and support their technical development, promoting best practice in coding, testing, and deployment
• Design and implement robust, scalable data pipelines (batch and real-time) to ingest, transform, and load data from multiple sources
• Build and maintain ETL/ELT processes to support analytics and BI platforms
• Identify and reuse existing data flows where appropriate to improve efficiency and reduce duplication
• Ensure data pipelines are fault-tolerant, observable, and maintainable
• Lead the development of streaming data solutions for real-time data processing
• Design and implement data models optimised for analytics workloads (e.g. star/snowflake schemas)
• Work with modern data warehouse platforms (e.g. Snowflake, BigQuery, Redshift, or similar)
• Identify bottlenecks and implement improvements across storage, compute, and transformation layers
• Ensure efficient handling of large-scale datasets
• Lead work on database design, optimisation, and maintenance
• Ensure data integrity, security, and governance standards are met

Key Skills & Experience:
• Previous experience mentoring junior engineers is a must-have
• Strong experience with Python or Scala for data processing
• Advanced SQL skills with experience handling large datasets
• Experience building and maintaining ETL/ELT pipelines in production environments
• Hands-on experience with workflow orchestration tools (e.g. Airflow)
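To give candidates a feel for the batch ETL work described above, here is a minimal extract-transform-load sketch in Python. It is purely illustrative and not part of the listing; the source records, field names, and in-memory "warehouse" are all hypothetical stand-ins for real systems.

```python
# Minimal batch ETL sketch: extract raw records, validate/transform them,
# and load the clean rows into a target store. All data here is hypothetical.

def extract():
    """Extract raw records from a source (here, a hard-coded sample batch)."""
    return [
        {"order_id": 1, "amount": "19.99", "country": "GB"},
        {"order_id": 2, "amount": "5.00", "country": "GB"},
        {"order_id": 3, "amount": "bad", "country": "FR"},
    ]

def transform(rows):
    """Cast amounts to float and drop rows that fail validation."""
    clean = []
    for row in rows:
        try:
            clean.append({**row, "amount": float(row["amount"])})
        except ValueError:
            continue  # in production: route to a dead-letter queue and log
    return clean

def load(rows, warehouse):
    """Append validated rows to the target 'orders' table; return row count."""
    warehouse.setdefault("orders", []).extend(rows)
    return len(rows)

if __name__ == "__main__":
    warehouse = {}
    loaded = load(transform(extract()), warehouse)
    print(loaded)  # 2 — the malformed row is rejected during transform
```

In a production setting each stage would typically be a task in an orchestrator such as Airflow, with retries, observability, and dead-letter handling around the transform step.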