

LSA Recruit
Senior Data Engineer (SC Cleared)
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer (SC Cleared) on a contract basis for "X" months, paying "Y" per hour, located in London (Hybrid). Requires 10+ years in data engineering, Azure Databricks expertise, and Azure certifications.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
💰 - Day rate
Unknown
🗓️ - Date
October 31, 2025
🕒 - Duration
Unknown
🏝️ - Location
Hybrid
📄 - Contract
Inside IR35
🔒 - Security
Yes
📍 - Location detailed
Greater London, England, United Kingdom
🧠 - Skills detailed
#Deployment #Monitoring #JavaScript #Data Visualisation #NoSQL #Azure SQL #R #Azure SQL Database #YAML (YAML Ain't Markup Language) #Version Control #DevOps #Agile #Big Data #Docker #Data Engineering #Forecasting #GIT #Metadata #Python #Spark SQL #Tableau #Data Transformations #Data Science #Compliance #Databricks #Scala #Jenkins #SQL (Structured Query Language) #Spark (Apache Spark) #Data Integration #Data Lineage #BI (Business Intelligence) #Cloud #Data Pipeline #Data Processing #Code Reviews #Storage #Azure Databricks #Automation #Azure #API (Application Programming Interface) #Azure Blob Storage #Delta Lake #Data Management #Kafka (Apache Kafka) #Datasets #Data Architecture #Data Lifecycle #Kubernetes #PySpark #Data Quality #ADF (Azure Data Factory) #Azure DevOps #ETL (Extract, Transform, Load) #Databases #Azure Event Hubs #Azure Data Factory #Microsoft Power BI #Data Governance
Role description
We have an exciting opportunity for a Senior Data Engineer based in London, UK (Hybrid).
Job Type: Contract (Inside IR35)
Note: Active SC Clearance required
Job Description:
Job Summary:
• We are seeking a highly skilled and experienced Senior Data Engineer to join our team and contribute to the development and maintenance of our cutting-edge Azure Databricks platform for economic data. This platform is critical for our Monetary Analysis, Forecasting, and Modelling activities.
• The Senior Data Engineer will be responsible for building and optimising data pipelines, implementing data transformations, and ensuring data quality and reliability.
• This role requires a strong understanding of data engineering principles, big data technologies, cloud computing (specifically Azure), and experience working with large datasets.
Key Responsibilities:
Data Pipeline Development & Optimisation:
• Design, develop, and maintain robust and scalable data pipelines for ingesting, transforming, and loading data from various sources (e.g., APIs, databases, financial data providers) into the Azure Databricks platform (a sketch follows this list).
• Optimise data pipelines for performance, efficiency, and cost-effectiveness.
• Implement data quality checks and validation rules within data pipelines.
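By way of illustration, here is a minimal PySpark sketch of such a pipeline stage: ingest a file drop from a provider, apply simple validation rules, and quarantine failures. The paths, table names, columns, and thresholds are all hypothetical placeholders, not details of the actual platform.

```python
# Hypothetical sketch: ingest a CSV drop from a data provider, validate it,
# and land it in Delta. All paths and table names are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

raw = (
    spark.read
    .option("header", "true")
    .schema("series_id STRING, obs_date DATE, obs_value DOUBLE")
    .csv("abfss://landing@example.dfs.core.windows.net/rates/")  # placeholder path
)

# Simple validation rules: non-null keys and a plausible value range.
is_valid = (
    F.col("series_id").isNotNull()
    & F.col("obs_date").isNotNull()
    & F.col("obs_value").between(-100.0, 100.0)
)
valid = raw.filter(is_valid)
rejected = raw.exceptAll(valid)  # capture failures instead of silently dropping

rejected.write.format("delta").mode("append").saveAsTable("econ_quarantine.rates_rejects")
valid.write.format("delta").mode("append").saveAsTable("econ_raw.rates")
```

Quarantining rejects rather than dropping them keeps validation failures auditable, which matters in a monetary-analysis setting.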
Data Transformation & Processing:
• Implement complex data transformations using Spark (PySpark or Scala) and other relevant technologies.
• Develop and maintain data processing logic for cleaning, enriching, and aggregating data (sketched after this list).
• Ensure data consistency and accuracy throughout the data lifecycle.
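A hedged sketch of what one transformation stage could look like, covering the clean, enrich, and aggregate steps above; the tables (econ_raw.rates, econ_ref.series_metadata) and columns are assumed purely for illustration.

```python
# Hypothetical transformation stage: clean, enrich, and aggregate the
# validated observations. Table and column names are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

obs = spark.table("econ_raw.rates")
meta = spark.table("econ_ref.series_metadata")  # assumed reference table

monthly = (
    obs
    .dropDuplicates(["series_id", "obs_date"])           # clean
    .join(meta, "series_id", "left")                     # enrich
    .withColumn("obs_month", F.date_trunc("month", "obs_date"))
    .groupBy("series_id", "obs_month")                   # aggregate
    .agg(
        F.avg("obs_value").alias("avg_value"),
        F.count("*").alias("n_obs"),
    )
)
monthly.write.format("delta").mode("overwrite").saveAsTable("econ_curated.rates_monthly")
```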
Azure Databricks Implementation:
• Work extensively with Azure Databricks Unity Catalog, including Delta Lake, Spark SQL, and other relevant services (see the sketch after this list).
• Implement best practices for Databricks development and deployment.
• Optimise Databricks workloads for performance and cost.
• Program in languages and formats such as SQL, Python, R, YAML, and JavaScript.
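For illustration, a minimal sketch of Unity Catalog's three-level naming, a Delta Lake MERGE, and Delta time travel from a notebook; the catalog, schema, and table names are assumptions.

```python
# Hypothetical sketch of Unity Catalog three-level naming, a Delta Lake
# MERGE, and Delta time travel. Catalog/schema/table names are illustrative.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Unity Catalog addresses tables as catalog.schema.table.
spark.sql("USE CATALOG econ")

# Idempotent upsert of newly arrived aggregates into the curated table.
spark.sql("""
    MERGE INTO curated.rates_monthly AS t
    USING staging.rates_monthly_updates AS s
      ON t.series_id = s.series_id AND t.obs_month = s.obs_month
    WHEN MATCHED THEN UPDATE SET *
    WHEN NOT MATCHED THEN INSERT *
""")

# Delta time travel: read an earlier version of the table for audit/debugging.
previous = spark.read.option("versionAsOf", 0).table("curated.rates_monthly")
```

Keying the MERGE on the table's natural key makes re-runs of the job safe, which is the usual reason to prefer it over blind appends.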
Data Integration:
• Integrate data from various sources, including relational databases, APIs, and streaming data sources (an API example follows this list).
• Implement data integration patterns and best practices.
• Work with API developers to ensure seamless data exchange.
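A minimal sketch of one integration pattern, pulling observations from a REST API into a Delta table; the endpoint URL, JSON shape, and table name are all assumed for illustration.

```python
# Hypothetical sketch: pull observations from a REST API and land them in
# Delta. The endpoint URL, JSON shape, and table name are all assumptions.
import requests
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

resp = requests.get(
    "https://api.example.com/v1/series/GBPUSD/observations",  # placeholder
    params={"start": "2025-01-01"},
    timeout=30,
)
resp.raise_for_status()

# Assumes a payload like {"observations": [{"date": ..., "value": ...}, ...]}.
rows = resp.json()["observations"]
df = spark.createDataFrame(rows)  # infers a schema from the list of dicts
df.write.format("delta").mode("append").saveAsTable("econ_raw.gbpusd_observations")
```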
Data Quality & Governance:
• Apply hands-on experience with Azure Purview for data quality and data governance.
• Implement data quality monitoring and alerting processes (sketched after this list).
• Work with data governance teams to ensure compliance with data governance policies and standards.
• Implement data lineage tracking and metadata management processes.
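One simple form of quality monitoring is a threshold check that fails the run, and so triggers the pipeline's alerting, when a metric degrades. The sketch below is tool-agnostic and does not show Purview wiring; the table, column, and 1% threshold are illustrative.

```python
# Hypothetical quality gate: fail the run (and so trigger the pipeline's
# alerting) when a metric breaches a threshold. Names and threshold are
# illustrative; Purview registration and scanning are configured separately.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

curated = spark.table("econ_curated.rates_monthly")

# Fraction of rows with a null aggregate value.
null_rate = curated.select(
    F.avg(F.col("avg_value").isNull().cast("double")).alias("null_rate")
).first()["null_rate"]

if null_rate is not None and null_rate > 0.01:  # 1% threshold (illustrative)
    raise ValueError(f"avg_value null rate {null_rate:.2%} exceeds 1% threshold")
```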
Collaboration & Communication:
• Collaborate closely with data scientists, economists, and other technical teams to understand data requirements and translate them into technical solutions.
• Communicate technical concepts effectively to both technical and non-technical audiences.
• Participate in code reviews and knowledge sharing sessions.
Automation & DevOps:
• Implement automation for data pipeline deployments and other data engineering tasks.
• Work with DevOps teams to build and implement CI/CD pipelines for environment deployments (see the sketch after this list).
• Promote and implement DevOps best practices.
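As a hedged sketch of one deployment step such a pipeline might run, the snippet below creates a Databricks job through the Jobs 2.1 REST API; the job name, notebook path, and environment variables are placeholders that a CI/CD pipeline (e.g., Azure DevOps) would inject from its secret store.

```python
# Hypothetical CI/CD deployment step: create a Databricks job via the
# Jobs 2.1 REST API. Host, token, cluster id, and paths are placeholders.
import os
import requests

host = os.environ["DATABRICKS_HOST"]    # e.g. https://adb-123.azuredatabricks.net
token = os.environ["DATABRICKS_TOKEN"]  # supplied by the pipeline's secret store

resp = requests.post(
    f"{host}/api/2.1/jobs/create",
    headers={"Authorization": f"Bearer {token}"},
    json={
        "name": "econ-rates-monthly",  # placeholder job name
        "tasks": [{
            "task_key": "transform",
            "notebook_task": {"notebook_path": "/Repos/data-eng/rates_monthly"},
            "existing_cluster_id": os.environ["DATABRICKS_CLUSTER_ID"],
        }],
    },
    timeout=30,
)
resp.raise_for_status()
print("Created job", resp.json()["job_id"])
```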
Essential Skills & Experience:
• 10+ years of experience in data engineering, with at least three years of hands-on experience with Azure Databricks.
• Strong proficiency in Python and Spark (PySpark) or Scala.
• Deep understanding of data warehousing principles, data modelling techniques, and data integration patterns.
• Extensive experience with Azure data services, including Azure Data Factory, Azure Blob Storage, and Azure SQL Database.
• Experience working with large datasets and complex data pipelines.
• Experience with data architecture design and data pipeline optimisation.
• Proven expertise with Databricks, including hands-on implementation experience and certifications.
• Experience with SQL and NoSQL databases.
• Experience with data quality and data governance processes.
• Experience with version control systems (e.g., Git).
• Experience with Agile development methodologies.
• Excellent communication, interpersonal, and problem-solving skills.
• Experience with streaming data technologies (e.g., Kafka, Azure Event Hubs); a streaming sketch follows this list.
• Experience with data visualisation tools (e.g., Tableau, Power BI).
• Experience with DevOps tools and practices (e.g., Azure DevOps, Jenkins, Docker, Kubernetes).
• Experience working in a financial services or economic data environment.
• Azure certifications related to data engineering (e.g., Azure Data Engineer Associate).
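As an illustration of the streaming requirement, a minimal Structured Streaming sketch reading JSON ticks from a Kafka endpoint into a Delta table (Azure Event Hubs exposes a Kafka-compatible endpoint on port 9093); the broker, topic, schema, and checkpoint path are assumptions.

```python
# Hypothetical Structured Streaming sketch: read JSON ticks from a Kafka
# endpoint into a Delta table. Broker, topic, schema, and paths are assumed.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

stream = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "example.servicebus.windows.net:9093")
    .option("subscribe", "market-ticks")  # placeholder topic
    .load()
)

parsed = stream.select(
    F.from_json(
        F.col("value").cast("string"),
        "series_id STRING, obs_value DOUBLE, ts TIMESTAMP",
    ).alias("tick")
).select("tick.*")

query = (
    parsed.writeStream
    .format("delta")
    .option("checkpointLocation", "/tmp/checkpoints/market_ticks")  # placeholder
    .toTable("econ_raw.market_ticks")  # starts the streaming query
)
```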