

Signify Technology
Senior Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer on a 6-month contract, outside IR35, with an ASAP start. The role is hybrid and requires expertise in Scala, Spark, ETL, and AWS, with a focus on scalable data pipelines and Medallion architecture.
Country
United Kingdom
Currency
£ GBP
-
Day rate
Unknown
-
Date
November 24, 2025
Duration
More than 6 months
-
Location
Hybrid
-
Contract
Outside IR35
-
Security
Unknown
-
Location detailed
United Kingdom
-
Skills detailed
#Scala #Databricks #AWS (Amazon Web Services) #Data Quality #SQL (Structured Query Language) #ETL (Extract, Transform, Load) #Observability #Data Engineering #Airflow #Data Pipeline #Spark (Apache Spark)
Role description
6 Month Contract, Outside IR35, ASAP Start Date, on site twice per week.
We're hiring a Data Engineer to join a large, community-driven marketplace used by millions of buyers and sellers every day. You'll play a key role in modernising the data platform that supports recommendations, trust and safety, operational insights, and high-volume event processing.
This role is heavily focused on building scalable data pipelines, strengthening data quality, and supporting a shift toward a full lakehouse and Medallion architecture. You'll be working closely with backend teams, so previous experience as a Scala developer is required.
What you'll work on:
• Migrating existing SQL-based workflows into Spark ETL pipelines on Databricks
• Designing and evolving Medallion data models (bronze, silver, gold layers), as in the illustrative sketch below
• Building and maintaining Airflow DAGs for orchestration
• Creating reliable ingestion and transformation pipelines across AWS
• Translating and integrating backend Scala models into data engineering workflows
• Improving performance, observability, and reliability of the data platform
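
For context on the kind of work listed above, here is a minimal, purely illustrative sketch of a bronze-to-silver Medallion step written in Scala with Spark on Databricks. It is not code from the hiring team: the table names (bronze.raw_events, silver.events) and the Event case class are assumptions made for the example.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

object BronzeToSilver {

  // Hypothetical event model; in practice this could be a case class shared
  // with the backend Scala services and reused as a typed Dataset schema.
  final case class Event(
    eventId: String,
    userId: String,
    eventType: String,
    occurredAt: java.sql.Timestamp
  )

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("bronze-to-silver").getOrCreate()
    import spark.implicits._

    // Bronze layer: raw events landed as-is from source systems.
    val bronze = spark.read.table("bronze.raw_events")

    // Silver layer: deduplicated, typed, quality-checked records.
    val silver = bronze
      .filter(col("event_id").isNotNull)   // basic data-quality gate
      .dropDuplicates("event_id")          // drop replayed or duplicated events
      .select(
        col("event_id").as("eventId"),
        col("user_id").as("userId"),
        col("event_type").as("eventType"),
        col("occurred_at").cast("timestamp").as("occurredAt")
      )
      .as[Event]

    // Delta is the default table format on Databricks; a full overwrite keeps
    // the sketch simple, whereas a real pipeline would likely merge incrementally.
    silver.write.format("delta").mode("overwrite").saveAsTable("silver.events")
  }
}
```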






