Senior Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer on a 6-month remote contract, starting ASAP. Key skills include SQL, Python, Databricks, and Azure. Experience with large-scale data transformations and cloud-native platforms is essential, particularly in data warehouse design and ETL processes.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
£476 (approx.)
-
🗓️ - Date discovered
May 22, 2025
🕒 - Project duration
More than 6 months
-
🏝️ - Location type
Remote
-
📄 - Contract type
Unknown
-
🔒 - Security clearance
Unknown
-
📍 - Location detailed
England, United Kingdom
-
🧠 - Skills detailed
#Data Lake #Data Warehouse #Apache Spark #Python #Agile #SQL (Structured Query Language) #Cloud #Data Migration #Data Processing #Data Lakehouse #Databricks #Migration #ETL (Extract, Transform, Load) #Scala #Spark (Apache Spark) #Data Engineering #Data Pipeline #Azure
Role description
Job Title: Senior Data Engineer
Location: Remote
Type: Contract (6 months)
Start Date: ASAP

About the Role
We are looking for an experienced Data Engineer to join a high-impact team working on a strategic enterprise data transformation. This role offers the opportunity to play a key part in the final delivery of a production-grade enterprise data platform, followed by deep involvement in a complex, innovative initiative: a future-facing, Fabric-like solution focused on scalability and advanced data processing.

Key Responsibilities
• Lead and contribute to various phases of the data platform development cycle, from finalising the current enterprise solution to initiating the next-gen data product.
• Migrate and transform large volumes of SQL-based data to cloud-native data platforms.
• Design and maintain scalable data pipelines using Databricks, Spark notebooks, and Azure-based tools (a brief illustrative sketch follows this description).
• Collaborate closely with engineering leads and cross-functional teams to ensure smooth delivery across sprints.
• Contribute to architecture discussions and recommend best practices in data engineering, especially around data warehouse design, performance optimisation, and data modelling.
• Support the transition to an enterprise fabric solution involving heavy innovation and complex data workflows.

Required Skills
• Strong background in SQL, Python, and SQL-based data migration.
• Proven experience with Databricks, Apache Spark, and cloud data platforms (Azure preferred).
• Solid understanding of data warehouse architectures, ETL pipelines, and modern data lakehouse concepts.
• Experience working with enterprise-scale data platforms and handling large-scale transformations.

Desirable Skills
• Prior exposure to Microsoft Fabric or similar modern data stack tools.
• Familiarity with complex data migration strategies and performance tuning.
• Experience in cross-functional delivery environments or agile data squads.

If this sounds like something you are interested in, please get in contact: thomas.deakin@spgresourcing.com

SPG Resourcing is an equal opportunities employer and is committed to fostering an inclusive workplace which values and benefits from the diversity of the workforce we hire. We offer reasonable accommodation at every stage of the application and interview process.
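For orientation only, here is a minimal PySpark sketch of the kind of pipeline work described above: reading a table from an existing SQL estate over JDBC and landing it as a Delta table in a Databricks lakehouse. The connection string, credentials, table names, and column handling are hypothetical placeholders, not details of this engagement.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("sql-to-lakehouse").getOrCreate()

# Read a source table from the existing SQL estate over JDBC
# (host, database, credentials, and table name are placeholders).
orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:sqlserver://<host>:1433;databaseName=<db>")
    .option("dbtable", "dbo.Orders")
    .option("user", "<user>")
    .option("password", "<password>")
    .load()
)

# Light transformation: normalise column names and stamp the load time.
curated = (
    orders
    .select([F.col(c).alias(c.lower()) for c in orders.columns])
    .withColumn("loaded_at", F.current_timestamp())
)

# Land the result as a Delta table in the lakehouse for downstream modelling.
(
    curated.write.format("delta")
    .mode("overwrite")
    .saveAsTable("curated.orders")
)
```

In practice, a migration at this scale would typically run incrementally (for example, watermarked reads or change data capture) rather than as a full overwrite; the sketch only shows the general shape of a Databricks/Spark pipeline.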