

Alexander Associates Technical Recruitment
Senior Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer on a 6-month initial contract paying £500-£540 per day, hybrid in London. It requires 10+ years of data engineering experience, hands-on Azure Databricks expertise, and security clearance. Key skills include Python, SQL, and data architecture.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
💰 - Day rate
500-540
🗓️ - Date
November 29, 2025
🕒 - Duration
6 months (initial contract)
🏝️ - Location
Hybrid
📄 - Contract
Inside IR35
🔒 - Security
Yes
📍 - Location detailed
London Area, United Kingdom
🧠 - Skills detailed
#Data Processing #ADF (Azure Data Factory) #Cloud #Azure SQL Database #Azure SQL #Dimensional Modelling #NoSQL #Storage #Azure Databricks #Python #Version Control #ETL (Extract, Transform, Load) #Azure #Automated Testing #Azure Blob Storage #SQL (Structured Query Language) #Programming #Data Pipeline #GIT #Agile #Data Architecture #PySpark #Data Quality #Security #Databricks #Data Governance #Azure Data Factory #Spark (Apache Spark) #Scala #Data Integration #Data Engineering #Datasets
Role description
Senior Data Engineer
• 6 Month initial contract
• Hybrid role (up to 3 times a week in London)
• Start ASAP
• £500 - £540 per day Inside IR35
• Azure/Python/SQL/ELT experience
• Please note that this role requires security clearance due to the nature of the project
Essential Skills & Experience
• 10+ years’ experience in Data Engineering, with a minimum of 3 years of hands-on Azure Databricks experience delivering production-grade solutions.
• Strong programming proficiency in Python and Spark (PySpark) or Scala, with the ability to build scalable and efficient data processing applications (a brief illustrative PySpark sketch follows this list).
• Advanced understanding of data warehousing concepts, including dimensional modelling, ETL/ELT patterns, and modern data integration architectures.
• Extensive experience working with Azure data services, particularly Azure Data Factory, Azure Blob Storage, Azure SQL Database, and related components within the Azure ecosystem.
• Demonstrable experience designing, developing, and maintaining large-scale datasets and complex data pipelines in cloud environments.
• Proven capability in data architecture design, including the development and optimisation of end-to-end data pipelines for performance, reliability, and scalability.
• Expert-level knowledge of Databricks, including hands-on implementation, cluster management, performance tuning, and (ideally) relevant Databricks certifications.
• Hands-on experience with SQL and NoSQL database technologies, with strong query optimisation skills.
• Solid understanding of data quality frameworks, data governance practices, and implementing automated testing/validation within pipelines (see the example test after this list).
• Proficient with version control systems such as Git, including branching strategies and CI/CD integration.
• Experience working within Agile delivery environments, collaborating closely with cross-functional teams to deliver iterative, high-quality solutions.
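To illustrate the kind of work described above, here is a minimal ELT sketch in PySpark of the sort a candidate might run on Azure Databricks: extract raw files from Azure Blob Storage, apply transformations, gate the load with a simple data-quality check, and write to a Delta table. The storage account, container, table name, and column names are hypothetical placeholders, not details taken from this role.

```python
# Minimal, illustrative ELT sketch only: the storage account, container,
# table, and column names below are hypothetical, not from the role description.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-elt-sketch").getOrCreate()

# Extract: read raw CSV files landed in Azure Blob Storage (hypothetical path).
raw = (
    spark.read
    .option("header", "true")
    .csv("wasbs://landing@exampleaccount.blob.core.windows.net/orders/")
)

# Transform: type the columns and derive a load date for partitioning.
orders = (
    raw.withColumn("amount", F.col("amount").cast("double"))
       .withColumn("order_date", F.to_date("order_date"))
       .withColumn("load_date", F.current_date())
)

# Data-quality gate: fail the run if required fields are missing.
bad_rows = orders.filter(
    F.col("order_id").isNull() | F.col("amount").isNull()
).count()
if bad_rows > 0:
    raise ValueError(f"Data quality check failed: {bad_rows} rows with null keys or amounts")

# Load: append to a Delta table for downstream dimensional models.
(
    orders.write
    .format("delta")
    .mode("append")
    .partitionBy("load_date")
    .saveAsTable("analytics.orders")
)
```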
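The automated testing/validation point can be illustrated with a small pytest-style unit test that runs a pipeline transformation against a local SparkSession, so it can execute in CI without a Databricks cluster. The transform_orders function and its columns are hypothetical stand-ins for a real pipeline step.

```python
# Hedged example of automated pipeline testing; transform_orders and its
# column names are hypothetical, not part of this role's actual codebase.
import pytest
from pyspark.sql import SparkSession, functions as F


def transform_orders(df):
    """Example transformation under test: cast amount and drop null keys."""
    return (
        df.withColumn("amount", F.col("amount").cast("double"))
          .filter(F.col("order_id").isNotNull())
    )


@pytest.fixture(scope="module")
def spark():
    # Small local session so the test runs outside Databricks, e.g. in CI.
    session = (
        SparkSession.builder
        .master("local[1]")
        .appName("pipeline-tests")
        .getOrCreate()
    )
    yield session
    session.stop()


def test_transform_orders_drops_null_keys_and_casts_amount(spark):
    source = spark.createDataFrame(
        [("A1", "10.50"), (None, "3.20")],
        ["order_id", "amount"],
    )
    result = transform_orders(source)

    assert result.count() == 1
    assert dict(result.dtypes)["amount"] == "double"
```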