

Data Engineer - AWS, Databricks & PySpark
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is a Data Engineer position focused on AWS, Databricks, and PySpark, offering a 6-month contract at £350 per day. It requires expertise in ETL, cloud data warehousing, and collaboration with analytics teams, with hybrid working (one day per month onsite in Harrow, London).
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
💰 - Day rate
350
🗓️ - Date discovered
July 26, 2025
🕒 - Project duration
6 months
🏝️ - Location type
Hybrid
📄 - Contract type
Outside IR35
🔒 - Security clearance
Unknown
📍 - Location detailed
City of London, England, United Kingdom
🧠 - Skills detailed
#Data Engineering #Data Science #Git #Migration #PySpark #Cloud #Databricks #DevOps #Datasets #ETL (Extract, Transform, Load) #Spark (Apache Spark) #Scala #AWS (Amazon Web Services) #Delta Lake #Data Governance
Role description
Data Engineer - AWS, Databricks & PySpark
Contract Role - Data Engineer
Location: Hybrid (1 day per month onsite in Harrow, London)
Rate: £350 per day (Outside IR35)
Duration: 6 months
A client of mine is looking for a Data Engineer to help maintain and enhance their existing cloud-based data platform. The core migration to a Databricks Delta Lakehouse on AWS has already been completed, so the focus will be on improving pipeline performance, supporting analytics, and contributing to ongoing platform development.
Key Responsibilities:
- Maintain and optimise existing ETL pipelines to support reporting and analytics (a minimal sketch follows this list)
- Assist with improvements to performance, scalability, and cost-efficiency across the platform
- Work within the existing Databricks environment to develop new data solutions as required
- Collaborate with analysts, data scientists, and business stakeholders to deliver clean, usable datasets
- Contribute to good data governance, CI/CD workflows, and engineering standards
- Continue developing your skills in PySpark, Databricks, and AWS-based tools
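To give a flavour of the work, here is a minimal PySpark sketch of the kind of ETL step described above, targeting a Delta Lake table on Databricks. Every table name, path, and column is a hypothetical placeholder rather than a detail of the client's platform.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("example-etl").getOrCreate()

# Read a raw dataset from S3 (hypothetical bucket and path).
raw = spark.read.json("s3://example-bucket/raw/events/")

# Basic cleaning: deduplicate, parse the timestamp, derive a partition column.
clean = (
    raw.dropDuplicates(["event_id"])
       .withColumn("event_ts", F.to_timestamp("event_ts"))
       .filter(F.col("event_ts").isNotNull())
       .withColumn("event_date", F.to_date("event_ts"))
)

# Write to a Delta Lake table, partitioned by date for downstream analytics.
(clean.write.format("delta")
      .mode("overwrite")
      .partitionBy("event_date")
      .saveAsTable("analytics.events_clean"))
```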
Tech Stack Includes:
- Databricks (Delta Lake, PySpark; see the maintenance sketch after this list)
- AWS
- CI/CD tooling (Git, DevOps pipelines)
- Cloud-based data warehousing and analytics tools
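On the performance and cost-efficiency side, routine Delta Lake maintenance on Databricks might look like the sketch below, reusing the session from the previous example (or the `spark` object a Databricks notebook provides). The table name, Z-order column, and retention window are again placeholders.

```python
# Compact small files and co-locate rows on a commonly filtered column,
# then remove snapshot files older than a 7-day retention window.
# "analytics.events_clean" and "event_id" are hypothetical.
spark.sql("OPTIMIZE analytics.events_clean ZORDER BY (event_id)")
spark.sql("VACUUM analytics.events_clean RETAIN 168 HOURS")
```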
If you're a mid-to-senior-level Data Engineer, feel free to apply or send your CV.