

Data Engineer
Position: Data Engineer
Employment Type: Contract, Full time
Start: ASAP
Location: London - Hybrid
Languages: English
Key skills:
• 5+ years of data engineering experience.
• Proven expertise in Databricks (including Delta Lake, Workflows, Unity Catalog).
• Strong command of Apache Spark, SQL, and Python.
• Hands-on experience with cloud platforms (AWS, Azure, or GCP).
• Understanding of modern data architectures (e.g., Lakehouse, ELT/ETL pipelines).
• Familiarity with CI/CD pipelines and infrastructure-as-code tools (Terraform is a plus).
• Experience with Airflow or similar orchestration tools.
• Familiarity with MLflow or MLOps practices.
• Knowledge of data warehousing solutions (Snowflake, Redshift, BigQuery).
• Consulting background is a plus.
• Strong communication skills (oral and written).
• Right to work in the UK is a must (no sponsorship available).
Responsibilities:
• Design, build, and maintain scalable and efficient data pipelines using Databricks and Apache Spark.
• Collaborate with Data Scientists, Analysts, and Product teams to understand data needs and deliver clean, reliable datasets.
• Optimize data workflows and storage (Delta Lake, Lakehouse architecture).
• Manage and monitor data pipelines in cloud environments (AWS, Azure, or GCP).
• Work with structured and unstructured data across multiple sources.
• Implement best practices in data governance, data security, and data quality.
• Automate workflows and data validation tasks using Python, SQL, and Databricks notebooks.
Should you be interested in being considered for this position and would like to discuss it further, please apply with your latest CV or share it directly with me at christophe.ramen@focusonsap.org