

Senior Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer on a 3-month contract, with a negotiable day rate. Key skills include AWS, Azure, Apache Spark, SQL, and experience with public sector clients. Applicants must have 5 years of UK address history.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
💰 - Day rate
Unknown
🗓️ - Date discovered
May 15, 2025
🕒 - Project duration
3 to 6 months
🏝️ - Location type
Unknown
📄 - Contract type
Outside IR35
🔒 - Security clearance
SC (Security Check) required
📍 - Location detailed
England, United Kingdom
🧠 - Skills detailed
#Data Quality #Data Modeling #Data Engineering #Security #DevOps #Lambda (AWS Lambda) #Scala #Consulting #Data Pipeline #IAM (Identity and Access Management) #Java #Spark (Apache Spark) #Data Processing #AWS DMS (AWS Database Migration Service) #Cloud #Compliance #GIT #SQL (Structured Query Language) #Scrum #S3 (Amazon Simple Storage Service) #Data Governance #Azure #Agile #AWS (Amazon Web Services) #Terraform #dbt (data build tool) #DMS (Data Migration Service) #Oracle #Apache Spark #AWS Glue #Database Modelling #Datasets #Python #ETL (Extract, Transform, Load) #Data Warehouse #Big Data #Version Control #Snowflake #Amazon Redshift #Redshift #Data Science #Batch
Role description
We are a technology solutions provider that specialises in serving the public sector.
Our mission is to help government organisations become data-driven, transform their digital offerings, streamline their processes, improve citizen services, and enhance transparency.
We are seeking a highly skilled and experienced Senior Data Engineer to join our team working with our Central Government client on a contract basis. This role requires a deep understanding of data engineering best practices, strong hands-on experience with AWS, Azure, Apache Spark, data warehousing, database modelling and SQL. You’ll play a critical role in designing, building, and maintaining our data infrastructure to support scalable, high-performance data pipelines and analytics platforms.
Responsibilities will include:
• Design, build, and maintain robust, scalable, and secure data pipelines using AWS services and Apache Spark (see the illustrative sketch after this list).
• Develop and optimize data models for reporting and analytics in Redshift and other data warehouse (DWH) platforms.
• Collaborate with Data Scientists, Analysts, and Business Stakeholders to understand data requirements and deliver clean, validated datasets.
• Monitor, troubleshoot, and optimize ETL/ELT workflows to ensure data quality and pipeline efficiency.
• Implement best practices in data governance, security, and compliance within cloud environments.
• Lead and mentor junior data engineers, promoting a culture of technical excellence and continuous improvement.
• Write clean, maintainable, and efficient code while following best practices in software development.
• Debug and resolve issues and implement solutions to improve the performance and functionality of existing applications.
• Stay up to date with industry trends, best practices, and the latest cloud and big data technologies, evaluating their potential to improve the platform, the development process, and coding standards.
• Collaborate with DevOps and Platform teams to deploy and maintain data pipelines and services.
• Document code, systems, and processes to facilitate knowledge sharing and maintainability.
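To give a concrete flavour of the pipeline work described above, here is a minimal, hypothetical PySpark sketch. The bucket paths, column names, and validation rule are illustrative placeholders, not details of the client's platform.

# Minimal, hypothetical PySpark batch pipeline: reads raw CSV from S3,
# applies a basic data-quality step, and writes partitioned Parquet ready
# for COPY into Redshift or querying via Redshift Spectrum.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-etl").getOrCreate()

# Read raw data (placeholder path and schema).
raw = spark.read.option("header", "true").csv("s3://example-raw-bucket/events/")

# Basic validation: drop rows missing the primary key, derive a date column.
clean = (
    raw.dropna(subset=["event_id"])
       .withColumn("event_date", F.to_date("event_timestamp"))
)

# Write curated, partitioned Parquet (placeholder path).
clean.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-curated-bucket/events/"
)

spark.stop()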
Requirements
• Proven work experience as a Senior Data Engineer using cloud platform technologies, alongside experience with a variety of database technologies including Oracle, Postgres, and Microsoft SQL Server.
• Strong expertise in AWS services including AWS DMS, S3, Lambda, Glue, EMR, Redshift, and IAM.
• Proficient in Apache Spark (batch and/or streaming) and big data processing.
• Solid experience with SQL and performance tuning in data warehouse environments.
• Hands-on experience with Amazon Redshift or equivalent, including table design, workload management, and implementing Redshift Spectrum.
• Experience building ETL/ELT pipelines using tools like AWS Glue, EMR, or custom frameworks (see the sketch after this list).
• Familiarity with data modelling concepts.
• Excellent problem-solving and communication skills.
• Proficiency in Java and data pipeline development.
• Familiarity with version control systems (e.g., Git) and agile development methodologies.
• Experience working with public sector clients.
• Consulting experience.
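As a hedged illustration of the AWS Glue experience listed above, the sketch below triggers and monitors a Glue job run with boto3. The job name, region, and job argument are invented for the example; in practice this kind of trigger would usually live in a scheduler such as Step Functions or Airflow.

# Hypothetical sketch: start an AWS Glue ETL job and poll until it finishes.
import time
import boto3

glue = boto3.client("glue", region_name="eu-west-2")

# Start the job run (placeholder job name and argument).
run = glue.start_job_run(
    JobName="example-events-etl",
    Arguments={"--run_date": "2025-05-15"},
)

# Poll until the run reaches a terminal state.
while True:
    state = glue.get_job_run(
        JobName="example-events-etl", RunId=run["JobRunId"]
    )["JobRun"]["JobRunState"]
    if state in ("SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT"):
        break
    time.sleep(30)

print(f"Glue job finished with state: {state}")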
Preferred Knowledge/Experience
• Experience with CI/CD and Infrastructure-as-Code (e.g., Terraform, CloudFormation).
• Familiarity with modern data stack components (e.g., dbt, Snowflake, Airbyte).
• Experience working in an Agile/Scrum environment.
• Knowledge of Python or Java/Scala for data engineering.
• Experience with version control systems (e.g., Git, CVS).
Applicants will be required to obtain SC (Security Check) clearance, and therefore 5 years' UK address history is essential.
IR35 status: Outside
Start: ASAP
Duration: 3 months (initially)
Day rate: negotiable