

Senior Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer (contract, 6 months) offering £450/day, remote (UK-based). Key skills include Azure, Databricks, Event Hubs, CI/CD, and DevOps. Strong experience in Azure environments and real-time data processing is required.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
💰 - Day rate
440
🗓️ - Date discovered
August 2, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
Remote
📄 - Contract type
Outside IR35
🔒 - Security clearance
Unknown
📍 - Location detailed
England, United Kingdom
🧠 - Skills detailed
#Azure Event Hubs #GIT #Infrastructure as Code (IaC) #Agile #Azure #Delta Lake #Data Pipeline #Version Control #Automation #DevOps #Data Processing #Scala #ETL (Extract, Transform, Load) #Monitoring #Cloud #ADF (Azure Data Factory) #Data Engineering #Azure Databricks #Deployment #Microsoft Azure #Spark (Apache Spark) #Databricks #Azure Data Factory #Batch
Role description
Data Engineer – Azure, Databricks, Event Hubs
Remote / UK-based – £450/day, Outside IR35
You will be joining a major programme of work for our customer, supporting key data initiatives central to their digital transformation and business objectives. The work will involve building modern, scalable data pipelines and supporting real-time data processing using Microsoft Azure technologies. Full project details will be shared upon interview.
Key Responsibilities:
• Design, develop, and maintain scalable data pipelines using Azure Data Factory and Databricks
• Enable real-time and batch data processing using Azure Event Hubs (a minimal sketch of this pattern follows below)
• Implement and manage CI/CD pipelines and support DevOps practices for the deployment and monitoring of data solutions
• Collaborate with cross-functional teams to deliver robust data engineering solutions aligned with project goals
• Apply best practices in version control, testing, and Infrastructure as Code within a cloud environment
• Contribute to a culture of continuous improvement and engineering excellence
Key Skills & Experience:
• Strong experience as a Data Engineer in Azure environments
• Experience with Azure Event Hubs or similar real-time streaming platforms
• Solid understanding of DevOps principles, including building and maintaining CI/CD pipelines
• Familiarity with version control systems such as Git and deployment automation tools
• Exposure to Delta Lake, Spark, or other large-scale data processing frameworks
• Strong stakeholder communication and the ability to work in a collaborative, agile environment
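For context on the real-time work described above, the sketch below shows one way a streaming ingest job might look on Azure Databricks: Spark Structured Streaming reading from Azure Event Hubs via its Kafka-compatible endpoint and appending raw events to a Delta table. This is illustrative only; the namespace, event hub name, connection string, checkpoint path, and table name are placeholders, and the actual design will depend on the customer's environment.

```scala
import org.apache.spark.sql.SparkSession

object EventHubsToDelta {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("eventhubs-to-delta").getOrCreate()

    // Hypothetical connection details -- in practice, read these from a secret scope.
    val ehNamespace      = "my-namespace"        // placeholder Event Hubs namespace
    val eventHubName     = "telemetry"           // placeholder event hub
    val connectionString = "Endpoint=sb://..."   // placeholder connection string

    // Event Hubs exposes a Kafka-compatible endpoint on port 9093,
    // so Spark's built-in Kafka source can consume it directly.
    val raw = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", s"$ehNamespace.servicebus.windows.net:9093")
      .option("subscribe", eventHubName)
      .option("kafka.security.protocol", "SASL_SSL")
      .option("kafka.sasl.mechanism", "PLAIN")
      .option("kafka.sasl.jaas.config",
        "org.apache.kafka.common.security.plain.PlainLoginModule required " +
          s"""username="$$ConnectionString" password="$connectionString";""")
      .load()

    // Append the raw events to a bronze Delta table for downstream batch and streaming jobs.
    raw.selectExpr("CAST(value AS STRING) AS body", "timestamp")
      .writeStream
      .format("delta")
      .option("checkpointLocation", "/mnt/checkpoints/telemetry_raw") // placeholder path
      .outputMode("append")
      .toTable("bronze.telemetry_raw")                                // placeholder table
      .awaitTermination()
  }
}
```

In a Databricks workspace this logic would typically live in a notebook or a scheduled job rather than a standalone main method, with CI/CD pipelines promoting it between environments.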
If you are available and interested in this opportunity, please apply for further information. Please note that due to the high volume of applications we are unable to respond to every applicant.
If you do not hear back from us within 7 days of sending your application, please assume that you have not been successful on this occasion.
At Lucid, we celebrate difference and value diverse perspectives, underpinned by our values ‘Honesty, Integrity and Pragmatism’. We are proud to provide equal opportunities in line with our Diversity and Inclusion policy and welcome applications from all suitably qualified or experienced people, regardless of personal characteristics.
If you have a disability or health condition and require support during the recruitment process, please do not hesitate to contact us via the details below.