Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is a Data Engineer position, offered as a full-time contract in London (hybrid) with an expected duration of more than 6 months and a competitive pay rate. Key requirements include 5+ years of data engineering experience, proven expertise in Databricks, and strong knowledge of cloud platforms.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
💰 - Day rate
Unknown
🗓️ - Date discovered
April 25, 2025
🕒 - Project duration
More than 6 months
🏝️ - Location type
Hybrid
📄 - Contract type
Fixed Term
🔒 - Security clearance
Unknown
📍 - Location detailed
London, England, United Kingdom
🧠 - Skills detailed
#"ETL (Extract #Transform #Load)" #BigQuery #Databricks #Storage #SQL (Structured Query Language) #Delta Lake #Data Architecture #Azure #Consulting #Data Quality #MLflow #Spark (Apache Spark) #Data Security #GCP (Google Cloud Platform) #Python #AWS (Amazon Web Services) #Cloud #Terraform #Apache Spark #Datasets #Data Governance #Consul #Security #Data Pipeline #Spark SQL #Data Science #Scala #Airflow #Snowflake #Data Engineering #Redshift
Role description

Position: Data Engineer

Employment Type: Contract, Full-time

Start: ASAP

Location: London - Hybrid

Languages: English

Key skills:

   • 5+ years of experience as a Data Engineer.

   • Proven expertise in Databricks (including Delta Lake, Workflows, Unity Catalog); an illustrative pipeline sketch follows this list.

   • Strong command of Apache Spark, SQL, and Python.

   • Hands-on experience with cloud platforms (AWS, Azure, or GCP).

   • Understanding of modern data architectures (e.g., Lakehouse, ELT/ETL pipelines).

   • Familiarity with CI/CD pipelines and infrastructure-as-code tools (Terraform is a plus).

   • Experience with Airflow or similar orchestration tools; see the DAG sketch after this list.

   • Familiarity with MLflow or MLOps practices; see the MLflow sketch after this list.

   • Knowledge of data warehousing solutions (Snowflake, Redshift, BigQuery).

   • Consulting background is a plus.

   • Strong communication skills (oral and written).

   • Right to work in the UK is a must (no sponsorship available).
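
To illustrate the Databricks and Spark skills listed above, here is a minimal sketch of a typical pipeline step: read raw data, clean it, and write it out as a partitioned Delta table. It assumes a Databricks runtime where `spark` is predefined and Delta Lake is available; the paths and column names are hypothetical.

```python
# Minimal pipeline step: read raw JSON, clean it, write a partitioned Delta table.
# Assumes a Databricks runtime where `spark` is predefined and Delta Lake is
# available; paths and column names are hypothetical.
from pyspark.sql import functions as F

raw = spark.read.json("/mnt/raw/orders/")             # hypothetical source path

cleaned = (
    raw
    .filter(F.col("order_id").isNotNull())            # drop malformed records
    .withColumn("order_date", F.to_date("order_ts"))  # derive a partition column
)

(
    cleaned.write
    .format("delta")                                  # Delta Lake storage format
    .mode("overwrite")
    .partitionBy("order_date")                        # partition for efficient reads
    .save("/mnt/curated/orders/")                     # hypothetical target path
)
```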
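
For the orchestration item, here is a minimal sketch of an Airflow DAG that triggers a Databricks job. It assumes Airflow 2.4+ with the apache-airflow-providers-databricks package installed and a pre-configured `databricks_default` connection; the DAG name and job ID are hypothetical.

```python
# Minimal Airflow DAG that triggers an existing Databricks job once a day.
# Assumes Airflow 2.4+ and the Databricks provider package; the DAG name and
# job_id are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.providers.databricks.operators.databricks import DatabricksRunNowOperator

with DAG(
    dag_id="daily_orders_pipeline",        # hypothetical DAG name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    run_orders_job = DatabricksRunNowOperator(
        task_id="run_orders_job",
        databricks_conn_id="databricks_default",  # assumed pre-configured connection
        job_id=12345,                             # hypothetical Databricks job ID
    )
```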
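
And for the MLflow/MLOps item, a minimal sketch of experiment tracking. It assumes the mlflow package is installed; the experiment name, parameter, and metric values are hypothetical.

```python
# Minimal MLflow experiment-tracking sketch: log a parameter and a metric
# inside a tracked run. Names and values are hypothetical.
import mlflow

mlflow.set_experiment("orders-forecast")   # hypothetical experiment name

with mlflow.start_run(run_name="baseline"):
    mlflow.log_param("model", "linear")    # record a hyperparameter
    mlflow.log_metric("rmse", 0.23)        # record an evaluation metric
```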

Responsibilities:

   • Design, build, and maintain scalable and efficient data pipelines using Databricks and Apache Spark.

   • Collaborate with Data Scientists, Analysts, and Product teams to understand data needs and deliver clean, reliable datasets.

   • Optimize data workflows and storage (Delta Lake, Lakehouse architecture).

   • Manage and monitor data pipelines in cloud environments (AWS, Azure, or GCP).

   • Work with structured and unstructured data across multiple sources.

   • Implement best practices in data governance, data security, and data quality.

   • Automate workflows and data validation tasks using Python, SQL, and Databricks notebooks (see the validation sketch below).
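
As referenced in the last item, here is a minimal sketch of an automated data-quality check that could run in a Databricks notebook. It assumes `spark` is predefined; the table name and the expectations being checked are hypothetical.

```python
# Minimal data-validation sketch: assert basic quality expectations on a table
# and fail loudly so the orchestrator surfaces the error. Assumes a Databricks
# runtime where `spark` is predefined; the table name is hypothetical.
from pyspark.sql import functions as F

df = spark.read.table("curated.orders")    # hypothetical Delta table

row_count = df.count()
null_ids = df.filter(F.col("order_id").isNull()).count()
dupe_ids = row_count - df.select("order_id").distinct().count()

assert row_count > 0, "table is empty"
assert null_ids == 0, f"{null_ids} rows have a null order_id"
assert dupe_ids == 0, f"{dupe_ids} duplicate order_id values"
```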

Should you be interested in being considered for this position and would like to discuss it further, please apply with your latest CV or share it directly with me at christophe.ramen@focusonsap.org.