

Crimson
Data Engineer - SC Cleared
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer with active SC Clearance, offering up to £520/day for a contract in London. Key skills include Azure Databricks, ETL, Spark, Python, and SQL. Hybrid work requires 3 days on-site weekly.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
💰 - Day rate
520
🗓️ - Date
November 21, 2025
🕒 - Duration
Unknown
🏝️ - Location
Hybrid
📄 - Contract
Inside IR35
🔒 - Security
Yes
📍 - Location detailed
London Area, United Kingdom
🧠 - Skills detailed
#Code Reviews #Data Engineering #JSON (JavaScript Object Notation) #Python #Deployment #ETL (Extract, Transform, Load) #Azure Databricks #Big Data #Spark SQL #Delta Lake #Data Quality #Databricks #Cloud #R #Azure #Data Cleaning #JavaScript #Spark (Apache Spark) #YAML (YAML Ain't Markup Language) #PySpark #SQL (Structured Query Language) #Scala #DevOps #Databases #Data Science #Data Transformations #Data Pipeline
Role description
Data Engineer – Azure Databricks/ SC Clearance – Contract
Active SC Clearance is required for this position
Hybrid working – 3 days / week on site required
Up to £520 / day – Inside IR35
We are currently recruiting for a highly experienced Data Engineer for a leading global transformation consultancy based in London. The successful candidate will be responsible for building and maintaining data pipelines, implementing data transformations, and ensuring data quality. This role requires a solid understanding of data engineering principles, experience with big data technologies, familiarity with cloud computing (specifically Azure), and a passion for working with data.
Key skills and responsibilities:
• Design, build, and maintain scalable ETL pipelines to ingest, transform, and load data from diverse sources (APIs, databases, files) into Azure Databricks (a minimal PySpark sketch follows this list).
• Implement data cleaning, validation, and enrichment using Spark (PySpark/Scala) and related tools to ensure quality and consistency.
• Utilize Unity Catalog, Delta Lake, Spark SQL, and best practices for Databricks development, optimization, and deployment (see the maintenance sketch after this list). Program in SQL, Python, R, YAML, and JavaScript.
• Integrate data from multiple sources and formats (CSV, JSON, Parquet, Delta) for downstream analytics, dashboards, and reporting.
• Apply Azure Purview for governance and quality checks. Monitor pipelines, resolve issues, and enhance data quality processes.
• Work closely with engineers, data scientists, and stakeholders. Participate in code reviews and clearly communicate technical concepts.
• Develop CI/CD pipelines for deployments and automate data engineering workflows using DevOps principles.
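To give a sense of the day-to-day work behind the ingestion, cleaning, and Unity Catalog/Delta Lake bullets above, here is a minimal PySpark sketch of that kind of pipeline. It is illustrative only: the file paths, column names, and the catalog/schema/table name (main.analytics.orders_enriched) are hypothetical placeholders, not details from this role.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# On Databricks a SparkSession is already provided; getOrCreate() reuses it.
spark = SparkSession.builder.getOrCreate()

# Ingest: read a CSV landing zone and a JSON reference feed (hypothetical paths).
orders = (
    spark.read.option("header", "true")
    .option("inferSchema", "true")
    .csv("/mnt/landing/orders/*.csv")
)
customers = spark.read.json("/mnt/landing/customers/*.json")

# Clean and validate: deduplicate, enforce non-null keys, normalise types.
orders_clean = (
    orders.dropDuplicates(["order_id"])
    .filter(F.col("order_id").isNotNull())
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
)

# Enrich: join reference data for downstream analytics and reporting.
enriched = orders_clean.join(customers, on="customer_id", how="left")

# Load: write a Delta table registered in Unity Catalog (placeholder name).
(
    enriched.write.format("delta")
    .mode("overwrite")
    .saveAsTable("main.analytics.orders_enriched")
)

In practice such a job would typically be scheduled as a Databricks workflow and wrapped in the monitoring and data quality checks described above.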
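As a rough illustration of the Delta Lake and Spark SQL housekeeping side of the role, the sketch below runs standard Databricks maintenance commands from Python. The table name is again a hypothetical placeholder.

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # provided by the Databricks runtime

# Compact small files and co-locate data on a frequently filtered column.
spark.sql("OPTIMIZE main.analytics.orders_enriched ZORDER BY (customer_id)")

# Remove data files no longer referenced by the table (default retention applies).
spark.sql("VACUUM main.analytics.orders_enriched")

# Inspect the table's transaction history for auditing and debugging.
spark.sql("DESCRIBE HISTORY main.analytics.orders_enriched").show(truncate=False)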
Interested? Please submit your updated CV to Lewis Rushton at Crimson for immediate consideration.
Not interested? Do you know someone who might be a perfect fit for this role? Refer a friend and earn £250 worth of vouchers!






