Whitehall Resources

Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer on a 6-month contract, paying an unspecified day rate. It requires on-site work in Cambridge 2 days per week. Key skills include Databricks, SQL, Python, and ETL pipeline development. Minimum 4 years of relevant experience is required.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
Unknown
-
🗓️ - Date
April 29, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
On-site
-
📄 - Contract
Inside IR35
-
🔒 - Security
Unknown
-
📍 - Location detailed
Cambridgeshire, England, United Kingdom
-
🧠 - Skills detailed
#Deployment #Databricks #Data Lineage #Jira #Version Control #SQL (Structured Query Language) #ETL (Extract, Transform, Load) #Dynatrace #Documentation #Data Pipeline #GIT #Scala #Data Engineering #Microsoft Power BI #Agile #Schema Design #Python #BI (Business Intelligence) #Data Quality #Data Governance #Automation
Role description
Whitehall Resources are currently looking for a Data Engineer for an initial 6-month contract.

••• INSIDE IR35 •••

This role requires on-site work in Cambridge 2 days per week.

Job Spec:

Our client is seeking a motivated and detail-oriented Data Engineer with a passion for designing and delivering high-quality data solutions in Databricks. You will be responsible for building and optimising data pipelines that form the foundation of our reporting and analytics ecosystem. Using your technical expertise, you will design and maintain efficient ETL processes to integrate data from systems such as ServiceNow, JIRA, and Dynatrace, ensuring accuracy, performance, and scalability.

Main Responsibilities:
• Design, build, and maintain scalable ETL pipelines in Databricks to integrate data from multiple business systems such as ServiceNow, JIRA, and ADO.
• Optimise data workflows for performance, scalability, and reliability.
• Implement data validation and quality checks to ensure trustworthy reporting in downstream tools such as Power BI.
• Design backend data models and schemas for analytics and reporting.
• Collaborate with the Staff Data Engineer and Visualisation Developer to align technical delivery with reporting needs.
• Manage code in Git and support CI/CD processes for Databricks deployments.
• Contribute to data lineage documentation, standards, and governance best practices.

Key Skills and Experience:
• Proven experience (minimum 4 years) in building and maintaining ETL pipelines using Databricks.
• Strong knowledge of SQL and Python for data transformation and automation.
• Proven experience in data modelling, schema design, and data quality validation.
• Solid understanding of data performance tuning and pipeline optimisation in Databricks.
• Experience working with Git-based version control and collaborative development workflows.
• Strong analytical and problem-solving skills, with an eye for efficiency and accuracy.
• Excellent communication and collaboration skills, comfortable working with both technical and non-technical partners.

Desirable Skills:
• Familiarity with Agile delivery methods and iterative development practices.
• Knowledge of data governance and data lineage documentation standards.
• Exposure to automation and CI/CD frameworks within Databricks.
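To give candidates a flavour of the data-quality work described above, here is a minimal, illustrative Python sketch of a row-level validation check on ticket data extracted from a JIRA-like source. All field names and the `validate_rows` helper are hypothetical, and a real Databricks pipeline would express such checks with PySpark DataFrame operations rather than plain Python:

```python
# Illustrative sketch only: split extracted rows into valid and rejected
# sets based on required non-empty fields, so downstream reporting (e.g.
# Power BI) only sees trustworthy records. Field names are hypothetical.

def validate_rows(rows, required_fields):
    """Return (valid, rejected); rejected rows carry the missing fields."""
    valid, rejected = [], []
    for row in rows:
        missing = [f for f in required_fields if not row.get(f)]
        if missing:
            rejected.append({"row": row, "missing": missing})
        else:
            valid.append(row)
    return valid, rejected

# Example: tickets extracted from a JIRA-like system (hypothetical shape).
tickets = [
    {"key": "DATA-101", "status": "Done", "assignee": "a.smith"},
    {"key": "DATA-102", "status": "", "assignee": "b.jones"},
]
good, bad = validate_rows(tickets, ["key", "status", "assignee"])
```

In a Databricks job the same split would typically be done with DataFrame filters, with rejected rows routed to a quarantine table for review rather than silently dropped.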