Lorien

Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer on a 6-month contract, hybrid (2 days onsite), requiring active SC Clearance. Key skills include Azure services, SQL, Python, Spark, and CI/CD practices. Experience in data architecture and cloud migration is essential.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
Unknown
-
πŸ—“οΈ - Date
March 5, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
Hybrid
-
📄 - Contract
Fixed Term
-
🔒 - Security
Yes
-
πŸ“ - Location detailed
London Area, United Kingdom
-
🧠 - Skills detailed
#PySpark #GitHub #Apache Airflow #Data Quality #ADLS (Azure Data Lake Storage) #Shell Scripting #Migration #Cloud #Unix #Data Engineering #Data Pipeline #Azure ADLS (Azure Data Lake Storage) #Cloudera #ADF (Azure Data Factory) #Python #Linux #Spark (Apache Spark) #Strategy #Data Management #Agile #Scripting #Azure Databricks #Scala #Airflow #Data Architecture #Azure Data Factory #Delta Lake #Leadership #Databricks #Metadata #Azure DevOps #DevOps #ETL (Extract, Transform, Load) #Data Strategy #Observability #Azure #SQL (Structured Query Language) #Storage #Data Lake
Role description
Data Engineer
6 Month Contract
Hybrid - 2 days per week onsite
Please note: Active SC Clearance is required for this role.

A leading organisation within the UK financial sector is embarking on a major data transformation programme. They are looking for an experienced Data Engineer to help design and deliver a modern, cloud-first data platform that will underpin some of the organisation's most critical functions.

Responsibilities:
• Design, build and deploy scalable, secure data solutions using Azure Databricks, Data Factory and Data Lake Storage.
• Develop and optimise advanced data pipelines with Python, SQL, Spark/PySpark and Delta Lake.
• Champion strong data quality, governance and observability practices.
• Modernise legacy systems and support large-scale migrations into Azure.
• Provide technical leadership, set engineering standards and contribute to architectural decisions.
• Mentor engineers and foster a culture of continuous improvement.
• Work closely with architecture, analytics and business teams to align engineering solutions with organisational needs.

Minimum Criteria:
• Extensive experience with Azure services, including Azure Databricks, Azure Data Lake Storage and Azure Data Factory.
• Advanced proficiency in SQL, Python and Spark (PySpark), with a strong focus on performance optimisation and distributed processing.
• Proven experience in CI/CD practices using industry-standard tools (e.g. GitHub Actions, Azure DevOps).
• Strong understanding of data architecture principles and cloud-native design patterns.

Essential Criteria:
• Demonstrated ability to lead technical delivery, mentor engineering teams and collaborate with stakeholders to ensure alignment between data solutions and business strategy.
• Proficiency in Linux/Unix environments and shell scripting.
• Deep understanding of source control, testing strategies and agile development practices.
• Self-motivated, with a strategic mindset and a passion for driving innovation in data engineering.

Desirable Criteria:
• Experience delivering data pipelines on Hortonworks/Cloudera on-prem and leading cloud migration initiatives.
• Familiarity with Apache Airflow, data modelling and metadata management.
• Experience influencing enterprise data strategy and contributing to architectural governance.

To apply for this position, please submit your CV.
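For candidates wondering what the CI/CD expectation above looks like in practice, the sketch below is a minimal, hypothetical GitHub Actions workflow for testing a Python/PySpark pipeline on push. It is illustrative only: the job names, file paths and requirements file are assumptions, not part of this role's actual tooling.

```yaml
# Hypothetical CI workflow for a data pipeline repo (not from the job description).
name: data-pipeline-ci
on:
  push:
    branches: [main]
  pull_request:

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      # requirements.txt is assumed to pin pyspark, delta-spark and pytest.
      - run: pip install -r requirements.txt
      # Unit tests for pipeline transformations; deployment gates (e.g. in
      # Azure DevOps) would typically sit downstream of this job.
      - run: pytest tests/
```

A real setup would usually add linting, environment-specific deployment stages and secrets handling, but the shape above covers the "industry-standard CI/CD" baseline the criteria refer to.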