

Airswift
Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer on a 12-month freelance contract in London (hybrid). Key skills include PySpark, Scala, and Databricks (certification required). Experience building data pipelines and working within agile teams is essential.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
Unknown
-
🗓️ - Date
December 3, 2025
🕒 - Duration
More than 6 months
-
🏝️ - Location
Hybrid
-
📄 - Contract
Inside IR35
-
🔒 - Security
Unknown
-
📍 - Location detailed
London Area, United Kingdom
-
🧠 - Skills detailed
#PySpark #Compliance #Agile #Data Engineering #MLflow #Scala #Scrum #Spark (Apache Spark) #Data Pipeline #ETL (Extract, Transform, Load) #Python #ML (Machine Learning) #Observability #Documentation #Databricks
Role description
Job Title: Data Engineer
Duration: 12 months (with potential extension)
Workload: Full-time hours
Setup: Freelance (Daily rate / Limited Company / Umbrella / Sole Trader)
Location: London, hybrid (3 days onsite, 2 days remote)
Overview:
We are seeking a highly skilled Data Engineer to design and deliver robust, high-performance data pipelines and enable advanced analytics for business-critical insights.
Key Responsibilities:
• Build and optimise reliable data pipelines for sourcing, processing, and storing data.
• Leverage internal data platforms and Databricks to transform data into actionable insights.
• Ensure stability, scalability, and compliance of data solutions.
• Collaborate within agile scrum teams to deliver high-quality products.
• Implement observability and troubleshoot production issues.
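To give a feel for the pipeline work described in the responsibilities above, here is a minimal PySpark sketch. The source path, cleansing rules, and output table name are hypothetical placeholders, not details taken from the role description:

from pyspark.sql import SparkSession, functions as F

# Spark session (on Databricks a session is normally provided as `spark`)
spark = SparkSession.builder.appName("example-pipeline").getOrCreate()

# Source: read raw events (placeholder path for illustration only)
raw = spark.read.json("/mnt/raw/events/")

# Process: light cleansing and a daily aggregation
daily = (
    raw.filter(F.col("event_type").isNotNull())
       .withColumn("event_date", F.to_date("event_timestamp"))
       .groupBy("event_date", "event_type")
       .count()
)

# Store: write a Delta table (the default table format on Databricks)
daily.write.format("delta").mode("overwrite").saveAsTable("analytics.daily_event_counts")

In practice the actual sources, transformations, and target tables would come from the internal data platforms the role mentions.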
Must-Have Skills:
• Proficiency in PySpark, Scala, and Databricks (Databricks certification required).
• Strong communication skills for presentations and technical documentation.
• Proven experience delivering stable, high-quality pipelines.
• Ability to profile and optimise complex pipelines, applying critical thinking to performance issues.
• Polyglot development experience (Python and Scala minimum).
Nice-to-Have:
• Exposure to MLflow and machine learning models.
• Understanding of marketing domain data.
• Experience with event streaming and real-time analytics.
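For the MLflow exposure listed as nice-to-have, a minimal tracking sketch is shown below; the experiment name, parameter, and metric are purely illustrative:

import mlflow

# Hypothetical experiment path, chosen for illustration only
mlflow.set_experiment("/Shared/demo-experiment")

# Log a parameter and a metric for a single run
with mlflow.start_run(run_name="baseline"):
    mlflow.log_param("model_type", "baseline")
    mlflow.log_metric("rmse", 0.42)

On Databricks, these calls would typically log to the workspace's managed MLflow tracking server.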






