

Crimson
Senior Data Engineer – Azure Databricks / SC Clearance – Contract
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer with active SC Clearance, offering a contract of unspecified length at up to £620/day (Inside IR35), working hybrid in London. Key skills include Azure Databricks, ETL, Spark, and data pipeline optimization.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
620
-
🗓️ - Date
November 21, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Hybrid
-
📄 - Contract
Inside IR35
-
🔒 - Security
Yes
-
📍 - Location detailed
London Area, United Kingdom
-
🧠 - Skills detailed
#Code Reviews #Data Engineering #Python #Deployment #ETL (Extract, Transform, Load) #Azure Databricks #Big Data #Spark SQL #Delta Lake #Data Quality #Databricks #Metadata #Cloud #Agile #R #API (Application Programming Interface) #Azure #JavaScript #Spark (Apache Spark) #YAML (YAML Ain't Markup Language) #PySpark #Compliance #SQL (Structured Query Language) #Scala #DevOps #Automation #Databases #Monitoring #Data Lifecycle #Data Science #Data Management #Data Transformations #Data Pipeline #Datasets
Role description
Senior Data Engineer – Azure Databricks / SC Clearance – Contract
Active SC Clearance is required for this position
Hybrid working – 3 days / week on site required
Up to £620 / day – Inside IR35
We are currently recruiting for a highly experienced Senior Data Engineer on behalf of a leading global transformation consultancy based in London. The Senior Data Engineer will provide hands-on technical expertise within an agile team. The ideal candidate will be well versed in building and optimising data pipelines, implementing data transformations, and ensuring data quality and reliability. This role requires a strong understanding of data engineering principles, big data technologies, and cloud computing (specifically Azure), along with experience working with large datasets.
Key skills and responsibilities:
• Design, build, and maintain scalable ETL pipelines for ingesting, transforming, and loading data from APIs, databases, and financial data sources into Azure Databricks. Optimize pipelines for performance, reliability, and cost, incorporating data quality checks (see the first sketch after this list).
• Develop complex transformations and processing logic using Spark (PySpark/Scala) for cleaning, enrichment, and aggregation, ensuring accuracy and consistency across the data lifecycle (see the enrichment sketch below).
• Work extensively with Unity Catalog, Delta Lake, Spark SQL, and related services. Apply best practices for development, deployment, and workload optimization (see the optimization sketch below). Program in SQL, Python, R, YAML, and JavaScript.
• Integrate data from relational databases, APIs, and streaming sources using best-practice patterns. Collaborate with API developers for seamless data exchange.
• Utilize Azure Purview for governance and quality monitoring. Implement lineage tracking, metadata management, and compliance with governance standards.
• Partner with data scientists, economists, and technical teams to translate requirements into solutions. Communicate effectively with technical and non-technical audiences. Participate in code reviews and knowledge-sharing sessions.
• Automate pipeline deployments and implement CI/CD processes in collaboration with DevOps teams. Promote best practices for automation and environment management.
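For a flavour of the first responsibility, here is a minimal PySpark sketch of an ingest-and-load step with a simple data quality gate, writing to a Delta table. All paths, table names, and column names are illustrative assumptions, not details taken from the role.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Ingest raw records from a landing zone (path is an assumption)
raw = spark.read.json("/mnt/landing/trades/")

# Simple data quality gate: quarantine records missing the primary key
# rather than silently dropping them
valid = raw.filter(F.col("trade_id").isNotNull())
rejected = raw.filter(F.col("trade_id").isNull())

rejected.write.format("delta").mode("append").save("/mnt/quarantine/trades/")

# Load the clean records into a governed Delta table, stamped with load time
(valid
    .withColumn("ingested_at", F.current_timestamp())
    .write.format("delta")
    .mode("append")
    .saveAsTable("finance.bronze_trades"))
```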
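The transformation and aggregation work might look like this hypothetical enrichment step; again, the schemas and table names are assumed for illustration.

```python
from pyspark.sql import functions as F

# `spark` is the ambient SparkSession (predefined in Databricks notebooks)
trades = spark.table("finance.bronze_trades")
instruments = spark.table("finance.dim_instruments")

# Enrich trades with instrument metadata, then aggregate daily volumes
daily_volumes = (
    trades.join(instruments, "instrument_id", "left")
          .withColumn("trade_date", F.to_date("executed_at"))
          .groupBy("trade_date", "asset_class")
          .agg(
              F.sum("notional").alias("total_notional"),
              F.count("trade_id").alias("trade_count"),
          )
)

daily_volumes.write.format("delta").mode("overwrite").saveAsTable(
    "finance.silver_daily_volumes"
)
```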
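Workload optimization on Databricks often comes down to Delta Lake housekeeping. A small sketch, once more with illustrative table and column names:

```python
# Compact small files and co-locate rows on a commonly filtered column
# (OPTIMIZE and ZORDER BY are Databricks Delta Lake SQL commands)
spark.sql("OPTIMIZE finance.bronze_trades ZORDER BY (trade_date)")

# Delta time travel: read an earlier version of the table to audit
# or reproduce a previous load
previous = spark.read.option("versionAsOf", 0).table("finance.bronze_trades")
print(previous.count())
```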
Interested? Please submit your updated CV to Lewis Rushton at Crimson for immediate consideration.
Not interested? Do you know someone who might be a perfect fit for this role? Refer a friend and earn £250 worth of vouchers!