Vivo Talent Solutions

Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is a Data Engineer position based in London or Newcastle, offered as a hybrid contract. Key skills include ETL workflow design, data integration, and data modelling; strong experience with relational and non-relational databases is essential.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
Unknown
-
πŸ—“οΈ - Date
December 24, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
πŸ“ - Location detailed
London Area, United Kingdom
-
🧠 - Skills detailed
#Automation #Data Governance #Data Pipeline #Data Quality #Data Ingestion #Documentation #Data Engineering #Datasets #Scala #Security #Data Architecture #Cloud #ETL (Extract, Transform, Load) #Data Analysis #Databases
Role description
Data Engineer / London or Newcastle / Hybrid / Contract

We're recruiting a Data Engineer to join a growing data function, playing a key role in designing, building, and maintaining scalable data infrastructure that supports analytics, insight, and automation across the organisation. This is an opportunity to work on modern data platforms, integrating multiple data sources and enabling high-quality, accessible data for downstream users.

The Role

As a Data Engineer, you will:
• Design, build, and maintain scalable data pipelines and ETL processes
• Integrate structured and unstructured data from multiple internal and external sources
• Ensure data quality, consistency, performance, and security across platforms
• Collaborate with analytics engineers, data analysts, and stakeholders to support data modelling and transformation
• Monitor, optimise, and troubleshoot data infrastructure and pipelines
• Produce clear documentation of data architecture and engineering processes

Key Responsibilities

• Build and manage robust data infrastructure for large-scale data ingestion and processing
• Develop automated, reliable data pipelines aligned to best practices
• Optimise ETL workflows for performance and scalability
• Implement data governance, access controls, and security standards
• Support self-service analytics by enabling clean, well-structured datasets
• Proactively identify and resolve data issues and pipeline failures

Skills & Experience

• Strong experience designing and building ETL workflows and data pipelines
• Experience integrating data from APIs, databases, and cloud services
• Solid understanding of data modelling and data warehousing concepts
• Experience with relational and non-relational databases, including performance optimisation

If you're interested in the role, please apply!
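To give a flavour of the ETL work described above, here is a minimal, hypothetical sketch in Python. The stage names, `payments` table, and sample records are illustrative assumptions for this listing, not details of the client's actual platform:

```python
import sqlite3

# Hypothetical raw records, standing in for an extracted source (API, file, etc.)
RAW_RECORDS = [
    {"id": 1, "name": " Alice ", "amount": "120.50"},
    {"id": 2, "name": "Bob", "amount": "80.00"},
    {"id": 2, "name": "Bob", "amount": "80.00"},  # duplicate to be dropped
]

def extract():
    """Extract: return raw records (stand-in for an API call or file read)."""
    return RAW_RECORDS

def transform(records):
    """Transform: normalise fields, cast types, and deduplicate on id."""
    seen, clean = set(), []
    for r in records:
        if r["id"] in seen:
            continue
        seen.add(r["id"])
        clean.append({
            "id": r["id"],
            "name": r["name"].strip(),
            "amount": float(r["amount"]),
        })
    return clean

def load(records, conn):
    """Load: write cleaned records into a relational table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS payments "
        "(id INTEGER PRIMARY KEY, name TEXT, amount REAL)"
    )
    conn.executemany(
        "INSERT OR REPLACE INTO payments VALUES (:id, :name, :amount)", records
    )
    conn.commit()

def run_pipeline(conn):
    """Run the three stages end to end."""
    load(transform(extract()), conn)

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    run_pipeline(conn)
    print(conn.execute("SELECT COUNT(*) FROM payments").fetchone()[0])  # 2 after dedup
```

In a production pipeline the same extract/transform/load split would typically be orchestrated by a scheduler and pointed at real databases or cloud storage rather than an in-memory SQLite table.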