
Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This is a Data Engineer position on a fixed-term contract exceeding six months, offering a competitive pay rate. Key skills required include Python, Apache Spark, and expertise in data warehouses such as Snowflake or BigQuery.
Country
United Kingdom
Currency
Unknown
Day rate
-
Date discovered
August 12, 2025
Project duration
More than 6 months
Location type
Unknown
Contract type
Fixed Term
Security clearance
Unknown
Location detailed
Leicester
Skills detailed
#Python #Compliance #Data Modeling #AI (Artificial Intelligence) #GCP (Google Cloud Platform) #Snowflake #Data Warehouse #Batch #Azure #Spark (Apache Spark) #Data Architecture #SQL (Structured Query Language) #EDW (Enterprise Data Warehouse) #Apache Spark #Data Quality #Airflow #Kafka (Apache Kafka) #AWS (Amazon Web Services) #BigQuery #Datasets #Data Engineering #Redshift #ML (Machine Learning) #ETL (Extract, Transform, Load) #Synapse #dbt (data build tool) #Scala #Data Pipeline #DevOps #Cloud
Role description
Reference: Vrg_2425_091
Job title: Data Engineer
Role Overview
We are seeking an exceptional Data Engineer to design, develop, and scale data pipelines, warehouses, and streaming systems that power mission-critical analytics and AI workloads. This is a hands-on engineering role where you will work with cutting-edge technologies, tackle complex data challenges, and shape the organization's data architecture.
Core Responsibilities
Design & own robust ETL/ELT pipelines using Python and Apache Spark for batch and near real-time processing; a minimal pipeline sketch follows this list.
Architect and optimize enterprise data warehouses (Snowflake, BigQuery, Redshift, Azure Synapse) for performance and scalability.
Build data models and implement governance frameworks ensuring data quality, lineage, and compliance.
Engineer streaming data solutions using Kafka and Kinesis for real-time insights.
Collaborate cross-functionally to translate business needs into high-quality datasets and APIs.
Proactively monitor, tune, and troubleshoot pipelines for maximum reliability and cost efficiency.
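As a concrete, purely illustrative sketch of such a pipeline, the snippet below shows a minimal PySpark batch ETL job; the bucket paths, the "orders" dataset, and all column names are hypothetical assumptions, not taken from this posting:

```python
# Minimal PySpark batch ETL sketch. Paths, the "orders" dataset, and all
# column names are illustrative assumptions, not from the job posting.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_daily_etl").getOrCreate()

# Extract: read raw JSON landed by an upstream system (hypothetical path).
raw = spark.read.json("s3://example-bucket/raw/orders/2025-08-12/")

# Transform: enforce types, derive columns, drop malformed and duplicate rows.
clean = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .withColumn("revenue", F.col("quantity") * F.col("unit_price"))
       .dropna(subset=["order_id", "order_ts"])
       .dropDuplicates(["order_id"])
)

# Load: write partitioned Parquet for downstream warehouse ingestion.
(clean.write.mode("overwrite")
      .partitionBy("order_date")
      .parquet("s3://example-bucket/curated/orders/"))

spark.stop()
```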
Must-Have Expertise
5–9+ years in data engineering, with proven delivery of enterprise-scale solutions.
Advanced skills in Python, Apache Spark, and SQL optimization.
Deep expertise in at least one leading data warehouse (Snowflake, BigQuery, Redshift, Azure Synapse).
Strong knowledge of data modeling, governance, and compliance best practices.
Hands-on experience with streaming data technologies (Kafka, Kinesis); see the streaming sketch after this list.
Cloud proficiency in AWS, Azure, or GCP.
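To make the streaming requirement concrete, here is a hedged sketch that reads a Kafka topic with Spark Structured Streaming; the broker address, topic name, and event schema are assumptions for illustration only:

```python
# Spark Structured Streaming sketch reading from Kafka. Broker address,
# topic name, and the event schema are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StringType, DoubleType

spark = SparkSession.builder.appName("events_stream").getOrCreate()

schema = (StructType()
          .add("user_id", StringType())
          .add("amount", DoubleType()))

events = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
          .option("subscribe", "events")                     # hypothetical topic
          .load()
          .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

# Near real-time aggregate; the checkpoint directory enables fault-tolerant restarts.
query = (events.groupBy("user_id")
         .agg(F.sum("amount").alias("total"))
         .writeStream.outputMode("complete").format("console")
         .option("checkpointLocation", "/tmp/checkpoints/events")
         .start())

query.awaitTermination()
```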
Preferred Qualifications
Experience with Airflow, dbt, or similar orchestration frameworks; a toy DAG example appears after this list.
Exposure to DevOps & CI/CD in data environments.
Familiarity with ML pipelines and feature stores.
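For orientation on the orchestration point, the toy DAG below wires an extract-transform-load chain in Airflow (assuming Airflow 2.4+); the DAG id, schedule, and task bodies are placeholders:

```python
# Toy Airflow DAG sketch (assumes Airflow 2.4+). DAG id, schedule, and
# the task callables are placeholder assumptions, not from the posting.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw data from the source system")

def transform():
    print("clean and model the data")

def load():
    print("publish curated tables to the warehouse")

with DAG(
    dag_id="example_daily_etl",       # hypothetical DAG id
    start_date=datetime(2025, 8, 12),
    schedule="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3
```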
Job Types: Full-time, Fixed term contract
Application deadline: 10/09/2025
Reference ID: Vrg2425091