

Data Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer specializing in People Data on a 3-month contract, hybrid in Cambridge. Key skills include AWS Glue, Databricks, and data modeling. Experience with HR data and CI/CD processes is essential. The contract is inside IR35.
Country
United Kingdom
Currency
£ GBP
Day rate
-
Date discovered
July 12, 2025
Project duration
3 to 6 months
Location type
Hybrid
Contract type
Inside IR35
Security clearance
Unknown
Location detailed
Cambridge, England, United Kingdom
Skills detailed
#Databricks #Strategy #Data Quality #ML (Machine Learning) #Automated Testing #Spark (Apache Spark) #AWS Glue #Terraform #Documentation #Scala #Data Lake #PySpark #Data Strategy #UAT (User Acceptance Testing) #Data Engineering #Data Pipeline #Data Lakehouse #S3 (Amazon Simple Storage Service) #Cloud #Deployment #Physical Data Model #AWS (Amazon Web Services) #ETL (Extract, Transform, Load) #GIT
Role description
Data Engineer – People Data
A multinational semiconductor and software design company seeks a Data Engineer for an initial 3-month contract, starting ASAP, based in Cambridge (hybrid, 1 day per week), inside IR35.
The Data Engineer will work with People Analytics to deliver an Enterprise Data Strategy.
Accountabilities
Build strong working relationships and collaborate with analysts and data owners to design conceptual and physical data models that power HR/People reporting, dashboards, and machine learning use cases.
Design and implement data solutions within a modern data lakehouse environment using Databricks, leveraging medallion architecture principles (Bronze, Silver, Gold & Platinum layers) to structure and refine data progressively.
Develop scalable and efficient data pipelines using AWS Glue to ingest, transform, and curate data across various structured and semi-structured sources.
Validate and assure data quality by collaborating with stakeholders to test and verify business-critical metrics and model outputs.
Support CI/CD processes to ensure controlled deployment of data assets across development, staging, and production environments.
Produce clear, user-focused documentation for implemented data models, metrics, and transformation logic.
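The medallion layering described above (progressively refining raw data into reporting-ready outputs) can be sketched in plain Python. This is an illustrative toy, not Databricks/PySpark code; the record fields and cleaning rules are hypothetical examples of HR data handling.

```python
# Hypothetical raw HR records as they might land in the Bronze layer:
# untyped, inconsistently cased, and including a bad row.
bronze = [
    {"employee_id": "001", "dept": "Engineering", "salary": "55000"},
    {"employee_id": "002", "dept": "engineering", "salary": "62000"},
    {"employee_id": None,  "dept": "HR",          "salary": "48000"},  # invalid: no id
]

def to_silver(records):
    """Silver: drop invalid rows, normalise casing, and cast types."""
    return [
        {"employee_id": r["employee_id"],
         "dept": r["dept"].title(),
         "salary": int(r["salary"])}
        for r in records
        if r["employee_id"] is not None
    ]

def to_gold(records):
    """Gold: aggregate into a reporting-ready metric (average salary per dept)."""
    totals = {}
    for r in records:
        total, count = totals.get(r["dept"], (0, 0))
        totals[r["dept"]] = (total + r["salary"], count + 1)
    return {dept: total / count for dept, (total, count) in totals.items()}

gold = to_gold(to_silver(bronze))
print(gold)  # {'Engineering': 58500.0}
```

In Databricks the same progression would typically be expressed as PySpark transformations writing to Bronze/Silver/Gold tables, with a further Platinum layer for curated business views.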
Skills, Capabilities, and Experience
Data Provisioning: Experience provisioning data from diverse source systems into cloud-based data platforms; familiarity with AWS Glue is beneficial.
Data Modelling: Strong capability in conceptual, logical, and physical data modelling, particularly within lakehouse architectures; experience working with HR/People data is beneficial.
Data Engineering: Proficient in building robust and reusable pipelines using PySpark and Databricks notebooks; skilled in applying the medallion architecture for progressive data refinement and downstream usability.
Testing & Verification: Knowledge of data validation techniques and automated testing frameworks; experience in unit, integration, and UAT testing for data pipelines.
Stakeholder Collaboration: Demonstrated ability to work cross-functionally with technical and non-technical stakeholders including platform teams, business SMEs, and governance roles to deliver quality data products.
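The testing and verification capability above can be illustrated with a minimal data-validation sketch in plain Python, assuming two hypothetical business rules (unique employee IDs and FTE between 0 and 1); a real pipeline might enforce such checks with pytest or a framework like Great Expectations.

```python
def validate_headcount(rows):
    """Check business-critical invariants on a People-data extract.

    Returns a list of human-readable error messages (empty if all checks pass).
    """
    errors = []
    ids = [r["employee_id"] for r in rows]
    if len(ids) != len(set(ids)):
        errors.append("duplicate employee_id values")
    for r in rows:
        if not 0 <= r["fte"] <= 1:
            errors.append(f"employee {r['employee_id']}: FTE {r['fte']} out of range")
    return errors

# Hypothetical sample extract: the last row breaks both rules.
sample = [
    {"employee_id": "001", "fte": 1.0},
    {"employee_id": "002", "fte": 0.6},
    {"employee_id": "002", "fte": 1.5},
]
for error in validate_headcount(sample):
    print(error)
```

Wired into CI/CD, a failing check like this would block promotion of a data asset from staging to production.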
Technologies & Tools
Databricks
AWS Glue
Amazon S3
CI/CD Tools (e.g., Git, Terraform)