

Senior Data Engineer | Freelance
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer (Freelance) in London, hybrid (2–3 days/week in office), lasting 6–9 months at £450–£500 per day. It requires 5–6+ years of data engineering experience, strong Azure Databricks and PySpark skills, and familiarity with Terraform/Kubernetes.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
💰 - Day rate
500
🗓️ - Date discovered
July 10, 2025
🕒 - Project duration
More than 6 months
🏝️ - Location type
Hybrid
📄 - Contract type
Outside IR35
🔒 - Security clearance
Unknown
📍 - Location detailed
London Area, United Kingdom
🧠 - Skills detailed
#Data Governance #Kubernetes #Terraform #Databricks #Azure Databricks #Data Engineering #Infrastructure as Code (IaC) #Batch #DevOps #Agile #Data Processing #Azure #Spark (Apache Spark) #Deployment #Microsoft Azure #Data Pipeline #PySpark
Role description
Location: London (Hybrid, 2–3 days/week in office; located in East India Dock, next to Canary Wharf)
Duration: 6–9 months (Outside IR35)
Start date: ASAP
Budget: £450–£500 per day
Elsewhen is a London-based consultancy designing digital products & services for the likes of Spotify, Inmarsat, and Zego. Over the past 11 years, Elsewhen has created a working environment that is impactful, driven, open, and friendly. We value outcomes over hours and agility over rigid processes.
Join the team: https://www.elsewhen.com.
About the Role:
We’re looking for an experienced, self-driven Senior Data Engineer to join a fast-paced project building a new product on a Microsoft Azure-based platform. You’ll work closely with an established team to design and deliver robust data pipelines and infrastructure using modern tools and best practices.
Key Responsibilities:
• Design, develop, and maintain data pipelines using Azure Databricks and PySpark (a brief sketch follows this list).
• Build and manage data processing workflows for real-time and batch data.
• Collaborate with DevOps to deploy solutions using CI/CD pipelines.
• Implement infrastructure as code with Terraform and container orchestration with Kubernetes (where required).
• Work independently with minimal supervision in a time-sensitive environment.
• Align deliverables with the existing team’s tech stack and contribute to rapid scaling of the product.
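By way of illustration only (not part of the client's brief), here is a minimal PySpark sketch of the kind of batch pipeline the responsibilities describe, assuming a Databricks environment where Delta is the default table format; the storage paths, column names, and app name are hypothetical placeholders.

```python
# Illustrative only: a daily batch job in the Azure Databricks / PySpark style
# this role describes. Assumes a Databricks runtime with Delta available;
# all storage paths and column names below are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-daily-batch").getOrCreate()

# Ingest raw landing-zone files (hypothetical ADLS Gen2 path).
raw = spark.read.json("abfss://landing@example.dfs.core.windows.net/orders/")

# Basic cleansing and enrichment.
cleaned = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_date", F.to_date("order_ts"))
       .filter(F.col("amount") > 0)
)

# Publish a curated Delta table, partitioned for downstream consumers.
(cleaned.write
    .format("delta")
    .mode("overwrite")
    .partitionBy("order_date")
    .save("abfss://curated@example.dfs.core.windows.net/orders/"))
```

A job of this shape would typically be deployed through the CI/CD pipelines mentioned above, with the surrounding workspace and clusters managed via Terraform.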
Key Requirements:
• 5–6+ years of hands-on experience in data engineering.
• Strong practical experience with Microsoft Azure, especially Azure Databricks.
• Solid working knowledge of PySpark.
• Familiarity with Terraform and/or Kubernetes (does not need to be expert level).
• Good understanding of CI/CD pipelines and best practices for deployment.
• Experience delivering both real-time and batch data processing solutions (see the streaming sketch after this list).
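For the real-time half of that last requirement, a minimal Structured Streaming sketch, again illustrative only: it assumes a Kafka-compatible source (for example Azure Event Hubs' Kafka endpoint), and the broker address, topic, and paths are placeholders; authentication options are omitted for brevity.

```python
# Illustrative only: real-time ingestion via Spark Structured Streaming.
# Assumes a Kafka-compatible endpoint and the spark-sql-kafka connector,
# which Databricks runtimes bundle; broker, topic, and paths are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-stream").getOrCreate()

events = (
    spark.readStream
         .format("kafka")
         .option("kafka.bootstrap.servers", "example.servicebus.windows.net:9093")
         .option("subscribe", "orders")
         .load()
)

# Kafka delivers key/value as bytes; cast to strings before persisting.
parsed = events.select(
    F.col("key").cast("string"),
    F.col("value").cast("string").alias("payload"),
    "timestamp",
)

# Append to a Delta sink with a checkpoint for exactly-once recovery.
(parsed.writeStream
    .format("delta")
    .option("checkpointLocation", "abfss://chk@example.dfs.core.windows.net/orders/")
    .outputMode("append")
    .start("abfss://curated@example.dfs.core.windows.net/orders_stream/"))
```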
Nice-to-Have:
• Broader Azure ecosystem knowledge.
• Familiarity with data governance and quality best practices.
• Exposure to collaborative agile environments and working with distributed teams.
Ideal Candidate:
• Independent & Proactive: Able to work with minimal coaching and direction.
• Fast-Paced: Thrives under tight timelines while maintaining high-quality output.
• Strong communicator and problem-solver.
• Comfortable joining a product team that is scaling rapidly and iterating.
Our Commitment to Diversity:
Diverse thoughts, backgrounds, and perspectives create stronger teams and better technology. We welcome everyone, regardless of culture, appearance, or perspective, and we foster individuality, empowering our team to challenge norms, grow ideas, and produce their best work.