Pontoon Solutions

Data Pipeline Engineer (On Prem to AWS Migration)

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Pipeline Engineer focused on AWS migration in the Utilities sector, offering a 6+ month contract at £700 - £750 per day, primarily remote with occasional Bristol travel. Key skills include AWS, Redshift, SQL, and Python.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
💰 - Day rate
750
🗓️ - Date
November 21, 2025
🕒 - Duration
More than 6 months
🏝️ - Location
Remote
📄 - Contract
Inside IR35
🔒 - Security
Unknown
📍 - Location detailed
Greater Bristol Area, United Kingdom
🧠 - Skills detailed
#Data Engineering #Python #Deployment #GCP (Google Cloud Platform) #ETL (Extract, Transform, Load) #Apache Spark #Distributed Computing #REST (Representational State Transfer) #Redshift #Airflow #AWS (Amazon Web Services) #Data Governance #Data Processing #Cloud #Agile #Terraform #GIT #Azure #GDPR (General Data Protection Regulation) #Kafka (Apache Kafka) #Migration #Security #Spark (Apache Spark) #Strategy #AI (Artificial Intelligence) #Version Control #Compliance #SQL (Structured Query Language) #Scala #DevOps #Automation #Schema Design #Data Manipulation #Data Science #Infrastructure as Code (IaC) #AWS Migration #Data Pipeline
Role description
Data Pipeline Engineer (on-prem to AWS migration)
Utilities
Predominantly remote; occasional travel to Bristol
6 months+
£700 - £750 per day

In short: We require three Data Engineers to join a large on-prem to AWS migration programme, replicating data pipelines in readiness for a lift and shift to the cloud.

Essential: AWS and Redshift

In full:

Role purpose
Reporting to the Lead Data Engineer, the Data Engineer is responsible for designing and maintaining scalable data pipelines, ensuring data availability, quality, and performance to support analytics and operational decision-making. They contribute to the data engineering roadmap through research, technical vision, business alignment, and prioritisation. The role is also expected to act as a subject matter expert in data engineering, fostering collaboration, continuous improvement, and adaptability within the organisation. They act as a mentor and enabler of best practices, advocating for automation, modularity, and an agile approach to data engineering, and they champion a culture of agility, encouraging everyone to embrace Agile values and practices. The role also carries accountability for deputising for their line manager whenever necessary, and is expected to support the product owner community while driving a positive culture, primarily through role modelling, across the Technology department and the wider business.

Key accountabilities
• Design, develop, and maintain scalable, secure, and efficient data pipelines, ensuring data is accessible, high-quality, and optimised for analytical and operational use.
• Contribute to the data engineering roadmap, ensuring solutions align with business priorities, technical strategy, and long-term sustainability.
• Optimise data workflows and infrastructure, leveraging automation and best practices to improve performance, cost efficiency, and scalability.
• Collaborate with Data Science, Insight, and Governance teams to support data-driven decision-making, ensuring seamless integration of data across the organisation.
• Implement and uphold data governance, security, and compliance standards, ensuring adherence to regulatory and organisational best practices.
• Identify and mitigate risks, issues, and dependencies, ensuring continuous improvement in data engineering processes and system reliability, in line with the company risk framework.
• Ensure quality is maintained throughout the data engineering lifecycle, delivering robust, scalable, and cost-effective solutions on time and within budget.
• Monitor and drive the data pipeline lifecycle, from design and implementation through to optimisation and post-deployment performance analysis.
• Support operational resilience, ensuring data solutions are maintainable, supportable, and aligned with architectural principles.
• Engage with third-party vendors where required, ensuring external contributions align with technical requirements, project timelines, and quality expectations.
• Continuously assess emerging technologies and industry trends, identifying opportunities to enhance data engineering capabilities.
• Document and maintain clear technical processes, facilitating knowledge sharing and operational continuity within the data engineering function.
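By way of illustration (this sketch is ours, not part of the client's specification): the pipeline replication work described above typically amounts to extracting from an on-prem database, staging to S3, and bulk-loading into Redshift with COPY. All hosts, table names, buckets, and role ARNs below are hypothetical placeholders.

```python
"""Minimal illustrative sketch of an on-prem -> S3 -> Redshift load.
Every connection string and identifier here is a made-up placeholder."""

import csv
import io

import boto3      # AWS SDK for Python
import psycopg2   # Postgres driver; Redshift speaks the Postgres protocol

# Hypothetical settings -- replace with real values.
ONPREM_DSN = "host=onprem-db.internal dbname=ops user=etl password=..."
REDSHIFT_DSN = (
    "host=example-cluster.abc123.eu-west-2.redshift.amazonaws.com "
    "port=5439 dbname=analytics user=etl password=..."
)
STAGING_BUCKET = "example-migration-staging"
IAM_ROLE_ARN = "arn:aws:iam::123456789012:role/example-redshift-copy"


def extract_to_s3(query: str, key: str) -> str:
    """Run an extract query against the on-prem database and stage the
    result in S3 as CSV, ready for a Redshift COPY."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    with psycopg2.connect(ONPREM_DSN) as conn, conn.cursor() as cur:
        cur.execute(query)
        writer.writerows(cur.fetchall())
    boto3.client("s3").put_object(
        Bucket=STAGING_BUCKET, Key=key, Body=buf.getvalue().encode()
    )
    return f"s3://{STAGING_BUCKET}/{key}"


def load_into_redshift(s3_uri: str, target_table: str) -> None:
    """Bulk-load the staged file via COPY, the idiomatic high-throughput
    Redshift load path (row-by-row INSERTs do not scale)."""
    with psycopg2.connect(REDSHIFT_DSN) as conn, conn.cursor() as cur:
        # Table name is interpolated directly for brevity; validate it
        # against an allow-list in real code.
        cur.execute(
            f"COPY {target_table} FROM %s IAM_ROLE %s CSV;",
            (s3_uri, IAM_ROLE_ARN),
        )


if __name__ == "__main__":
    uri = extract_to_s3(
        "SELECT * FROM meter_readings", "staging/meter_readings.csv"
    )
    load_into_redshift(uri, "analytics.meter_readings")
```

In practice each replicated pipeline would be wrapped in an orchestrator such as Airflow (named in the requirements below) rather than run as a script.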
Knowledge and experience required
• Excellent communication skills, with the ability to convey technical concepts to both technical and non-technical audiences.
• Experience working with large-scale data processing and distributed computing frameworks.
• Knowledge of cloud platforms such as AWS, Azure, or GCP, with hands-on experience in cloud-based data services.
• Proficiency in SQL and Python for data manipulation and transformation.
• Experience with modern data engineering tools, including Apache Spark, Kafka, and Airflow.
• Strong understanding of data modelling, schema design, and data warehousing concepts.
• Familiarity with data governance, privacy, and compliance frameworks (e.g., GDPR, ISO 27001).
• Hands-on experience with version control systems (e.g., Git) and infrastructure as code (e.g., Terraform, CloudFormation).
• Understanding of Agile methodologies and DevOps practices for data engineering.

Please be advised that if you haven't heard from us within 48 hours, your application has unfortunately not been successful on this occasion. We may, however, keep your details on file for any suitable future vacancies and contact you accordingly.

Pontoon is an employment consultancy and operates as an equal opportunities employer.

We use generative AI tools to support our candidate screening process. This helps us ensure a fair, consistent, and efficient experience for all applicants. Rest assured, all final decisions are made by our hiring team, and your application will be reviewed with care and attention.