Lorien

Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer with active SC Clearance, offering a 6-month hybrid contract. Key skills include Azure Databricks, SQL, Python, and Spark. Experience in CI/CD and cloud-native design is essential, with a focus on data transformation in the financial sector.
🌎 - Country
United Kingdom
πŸ’± - Currency
Β£ GBP
-
πŸ’° - Day rate
Unknown
-
πŸ—“οΈ - Date
April 14, 2026
πŸ•’ - Duration
More than 6 months
-
🏝️ - Location
Hybrid
-
πŸ“„ - Contract
Fixed Term
-
πŸ”’ - Security
Yes
-
πŸ“ - Location detailed
London Area, United Kingdom
-
🧠 - Skills detailed
#Leadership #ETL (Extract, Transform, Load) #Azure Databricks #Agile #Cloudera #Data Quality #Scripting #Azure #Data Architecture #Migration #Data Engineering #PySpark #Cloud #ADLS (Azure Data Lake Storage) #Unix #Spark (Apache Spark) #Metadata #ADF (Azure Data Factory) #Data Strategy #Azure Data Factory #SQL (Structured Query Language) #Observability #Data Lake #Strategy #DevOps #Linux #Shell Scripting #Storage #Apache Airflow #Python #Data Pipeline #Delta Lake #GitHub #Databricks #Data Management #Azure DevOps #Scala #Airflow
Role description
Data Engineer - SC Cleared
6 Month Contract
Hybrid - 2 days per week onsite

Please note: Active SC Clearance is required for this role.

A leading organisation within the UK financial sector is embarking on a major data transformation programme. They are looking for an experienced Data Engineer to help design and deliver a modern, cloud-first data platform that will underpin some of the organisation's most critical functions.

Responsibilities:
• Design, build and deploy scalable, secure data solutions using Azure Databricks, Data Factory and Data Lake Storage.
• Develop and optimise advanced data pipelines with Python, SQL, Spark/PySpark and Delta Lake.
• Champion strong data quality, governance and observability practices.
• Modernise legacy systems and support large-scale migrations into Azure.
• Provide technical leadership, set engineering standards and contribute to architectural decisions.
• Mentor engineers and foster a culture of continuous improvement.
• Work closely with architecture, analytics and business teams to align engineering solutions with organisational needs.

Minimum Criteria:
• Extensive experience with Azure services including Azure Databricks, Azure Data Lake Storage, and Azure Data Factory.
• Advanced proficiency in SQL, Python, and Spark (PySpark), with a strong focus on performance optimisation and distributed processing.
• Proven experience in CI/CD practices using industry-standard tools (e.g. GitHub Actions, Azure DevOps).
• Strong understanding of data architecture principles and cloud-native design patterns.

Essential Criteria:
• Demonstrated ability to lead technical delivery, mentor engineering teams and collaborate with stakeholders to ensure alignment between data solutions and business strategy.
• Proficiency in Linux/Unix environments and shell scripting.
• Deep understanding of source control, testing strategies, and agile development practices.
• Self-motivated with a strategic mindset and a passion for driving innovation in data engineering.

Desirable Criteria:
• Experience delivering data pipelines on Hortonworks/Cloudera on-prem and leading cloud migration initiatives.
• Familiarity with Apache Airflow, data modelling, and metadata management.
• Experience influencing enterprise data strategy and contributing to architectural governance.

To apply for this position, please submit your CV.