eTeam

Senior Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer on an 8-month contract, located in "Newport, Lichfield, Manchester, Darlington, or potentially London." Key skills required include Python, PySpark, SQL, and ETL development, with experience in public sector environments preferred.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
Unknown
-
🗓️ - Date
February 28, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
London Area, United Kingdom
-
🧠 - Skills detailed
#Programming #Data Processing #Azure #Data Governance #GIT #Automation #Data Engineering #Spark (Apache Spark) #ETL (Extract, Transform, Load) #PySpark #Data Quality #Documentation #Version Control #Cloud #Scala #BI (Business Intelligence) #AWS (Amazon Web Services) #SQL (Structured Query Language) #Agile #Data Pipeline #Python
Role description
Data Engineer (8-Month Contract)
📍 Newport, Lichfield, Manchester, Darlington, and potentially London
📅 Contract: 8 Months

We are seeking a Data Engineer to join our team on an 8-month contract. This role offers the opportunity to design, build, and optimise scalable data pipelines that support operational systems, analytics, and Business Intelligence (BI) platforms. You will play a key role in transforming manual data processes into automated, reliable, and repeatable workflows while ensuring high standards of quality, testing, and documentation.

Key Responsibilities
• Develop and integrate data products and services into operational systems and business processes
• Design and implement scalable data flows connecting operational systems to analytics and BI platforms
• Re-engineer manual data processes to enable automation and scaling
• Design, develop, and optimise ETL pipelines for performance and reliability
• Write and maintain PySpark, Python, and SQL code aligned to design documentation (see the sketch following this description)
• Develop unit and integration tests to ensure data quality and pipeline robustness
• Manage code using Git and follow version control best practices
• Maintain documentation for data processing pipelines and statistical functions
• Collaborate with cross-functional teams to troubleshoot and improve solutions
• Report progress and contribute to team planning and delivery

Required Skills & Experience
• Experience in data engineering, ETL development, or data pipeline implementation
• Strong programming skills in Python, PySpark, and SQL
• Understanding of data modelling and pipeline architecture
• Experience writing and executing tests (unit/integration)
• Version control experience using Git
• Ability to work collaboratively in agile or delivery-focused environments
• Strong communication skills and ability to deliver at pace

Desirable
• Experience working in large public sector or enterprise environments
• Exposure to cloud platforms (e.g., Azure/AWS)
• Understanding of data governance and data quality principles
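To illustrate the kind of PySpark ETL and testing work the responsibilities above describe, here is a minimal sketch. It is not part of the role description: it assumes a local PySpark environment, and the function, table columns, and paths (clean_transactions, customer_id, amount) are hypothetical examples only.

# Minimal PySpark ETL sketch (illustrative; names, columns, and paths are hypothetical).
from pyspark.sql import DataFrame, SparkSession
from pyspark.sql import functions as F


def clean_transactions(df: DataFrame) -> DataFrame:
    """Standardise a raw transactions extract: trim identifiers, drop rows
    without an id, cast amounts, and stamp a load date for BI reporting."""
    return (
        df.withColumn("customer_id", F.trim(F.col("customer_id")))
          .filter(F.col("customer_id") != "")
          .withColumn("amount", F.col("amount").cast("double"))
          .withColumn("load_date", F.current_date())
    )


def run_pipeline(spark: SparkSession, source_path: str, target_path: str) -> None:
    """Read a CSV extract, apply the transform, and write Parquet output."""
    raw = spark.read.option("header", True).csv(source_path)
    clean_transactions(raw).write.mode("overwrite").parquet(target_path)


# A unit test for the transform, runnable with pytest against a local Spark
# session (no cluster required).
def test_clean_transactions_drops_blank_ids():
    spark = SparkSession.builder.master("local[1]").appName("test").getOrCreate()
    raw = spark.createDataFrame(
        [(" C001 ", "10.5"), ("", "3.0")],
        ["customer_id", "amount"],
    )
    rows = clean_transactions(raw).collect()
    assert len(rows) == 1
    assert rows[0]["customer_id"] == "C001"
    assert rows[0]["amount"] == 10.5

Keeping the transform as a pure DataFrame-to-DataFrame function is what makes it unit-testable without any external data, which is one common way to meet the testing and code-quality expectations listed above.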