Alexander Associates Technical Recruitment

Senior Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer on a 6-month contract, hybrid in London, paying £550 - £575 per day. It requires 10+ years in Data Engineering, 3+ years of hands-on Azure Databricks experience, and security clearance due to the nature of the project.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
💰 - Day rate
£550 - £575
🗓️ - Date
December 12, 2025
🕒 - Duration
6 months (initial contract)
🏝️ - Location
Hybrid
📄 - Contract
Inside IR35
🔒 - Security
Yes
📍 - Location detailed
London Area, United Kingdom
🧠 - Skills detailed
#NoSQL #GIT #Scala #Data Governance #Data Pipeline #Automated Testing #Spark (Apache Spark) #Programming #Data Integration #Datasets #Azure Blob Storage #Data Quality #Python #Databricks #Azure Data Factory #ETL (Extract, Transform, Load) #Agile #Azure Databricks #Dimensional Modelling #SQL (Structured Query Language) #Azure SQL #Storage #Data Architecture #Cloud #PySpark #Security #Azure #ADF (Azure Data Factory) #Version Control #Data Engineering #Azure SQL Database #Data Processing
Role description
Senior Data Engineer
6-month initial contract
Hybrid role (up to 3 times a week in London)
Start ASAP
£550 - £575 per day, Inside IR35
Azure/Python/SQL/ELT experience

• Please note that this role requires security clearance due to the nature of the project.

Essential Skills & Experience
• 10+ years’ experience in Data Engineering, with a minimum of 3 years of hands-on Azure Databricks experience delivering production-grade solutions.
• Strong programming proficiency in Python and Spark (PySpark) or Scala, with the ability to build scalable and efficient data processing applications.
• Advanced understanding of data warehousing concepts, including dimensional modelling, ETL/ELT patterns, and modern data integration architectures.
• Extensive experience working with Azure data services, particularly Azure Data Factory, Azure Blob Storage, Azure SQL Database, and related components within the Azure ecosystem.
• Demonstrable experience designing, developing, and maintaining large-scale datasets and complex data pipelines in cloud environments.
• Proven capability in data architecture design, including the development and optimisation of end-to-end data pipelines for performance, reliability, and scalability.
• Expert-level knowledge of Databricks, including hands-on implementation, cluster management, performance tuning, and (ideally) relevant Databricks certifications.
• Hands-on experience with SQL and NoSQL database technologies, with strong query optimisation skills.
• Solid understanding of data quality frameworks, data governance practices, and implementing automated testing/validation within pipelines.
• Proficient with version control systems such as Git, including branching strategies and CI/CD integration.
• Experience working within Agile delivery environments, collaborating closely with cross-functional teams to deliver iterative, high-quality solutions.