

Linnk Group
Senior Data Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer on a 6-month contract in London, offering a hybrid work model. Key skills include AWS expertise, Python, SQL, and data architecture. Requires 6+ years in Data Engineering and experience with CI/CD tools.
Country
United Kingdom
Currency
£ GBP
-
Day rate
Unknown
-
Date
October 29, 2025
Duration
More than 6 months
-
Location
Hybrid
-
Contract
Fixed Term
-
Security
Unknown
-
Location detailed
London Area, United Kingdom
-
Skills detailed
#Monitoring #SQL (Structured Query Language) #Aurora #Data Processing #PySpark #Data Engineering #Lambda (AWS Lambda) #Project Management #Spatial Data #Spark (Apache Spark) #Data Management #Cloud #Jira #Redshift #Snowflake #Infrastructure as Code (IaC) #Apache Airflow #Databases #Python #GIT #GitLab #Data Architecture #Security #Data Science #Agile #AI (Artificial Intelligence) #Airflow #S3 (Amazon Simple Storage Service) #AWS (Amazon Web Services) #SageMaker
Role description
Employment Type: Contract
Contract Length: 6 months (extendable)
Location: London, United Kingdom
Work Model: Hybrid
About the Data Platform:
• The Data Platform will be built and managed "as a Product" to support a Data Mesh organization.
• The Data Platform focuses on enabling decentralized management, processing, analysis and delivery of data, while enforcing corporate-wide federated governance over data and project environments across business domains.
• The goal is to empower multiple teams to create and manage high-integrity data and data products that are analytics- and AI-ready, and consumed internally and externally.
What does a Data Infrastructure Engineer do?
• A Data Infrastructure Engineer is responsible for developing, maintaining and monitoring the data platform infrastructure and operations. The infrastructure and pipelines you build will support data processing, data analytics, data science and data management across the business.
• The data platform infrastructure will conform to a zero-trust, least-privilege architecture, with strict adherence to data and infrastructure governance and control in a multi-account, multi-region AWS environment.
• You will use Infrastructure as Code and CI/CD to continuously improve, evolve and repair the platform.
• You will design architectures and create reusable solutions that reflect the business needs.
Responsibilities will include:
• Collaborating across departments to develop and maintain the data platform
• Building infrastructure and data architectures in CloudFormation and SAM
• Designing and implementing data processing environments and integrations using AWS PaaS such as Glue, EMR, SageMaker, Redshift, Aurora and Snowflake
• Building data processing and analytics pipelines as code, using Python, SQL, PySpark, Spark, CloudFormation, Lambda, Step Functions and Apache Airflow
• Monitoring and reporting on the data platform's performance, usage and security
• Designing and applying security and access control architectures to secure sensitive data
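As a purely illustrative sketch (not part of the role description), the "pipelines as code" style of work listed above can be pictured as ordered extract/transform/load steps declared and orchestrated in plain Python, in the spirit of Step Functions or Airflow task graphs. All names and data below are hypothetical:

```python
# Hypothetical pipeline-as-code sketch: each stage is a function, and the
# orchestrator runs them in declared order, passing results along.
# In a real deployment these stages would read from/write to services such
# as S3 or Redshift; here they use in-memory stand-ins.

def extract():
    # Stand-in for reading raw records (e.g. from S3).
    return [{"id": 1, "amount": "10.5"}, {"id": 2, "amount": "3.0"}]

def transform(records):
    # Cast amounts to float; numeric correctness matters downstream.
    return [{**r, "amount": float(r["amount"])} for r in records]

def load(records):
    # Stand-in for writing to a warehouse; returns the loaded total.
    return sum(r["amount"] for r in records)

def run_pipeline():
    # Run the stages in declared order.
    data = extract()
    data = transform(data)
    return load(data)

total = run_pipeline()
```

In practice each stage would map to an Airflow task or a Step Functions state rather than a direct function call, but the structure, discrete stages wired together as code, is the same.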
You will have:
• 6+ years of experience in a Data Engineering role
• Strong experience and knowledge of data architectures implemented in AWS using native AWS services such as S3, DataZone, Glue, EMR, SageMaker, Aurora and Redshift
• Experience administering databases and data platforms
• Strong proficiency in CloudFormation, Python and SQL
• Knowledge and experience of relational databases such as Postgres and Redshift
• Experience using Git for code versioning and lifecycle management
• Experience operating to Agile principles and ceremonies
• Hands-on experience with CI/CD tools such as GitLab
• A keen eye for detail, and a passion for accuracy and correctness in numbers
Whilst not essential, the following skills would also be useful:
• Experience using Jira, or other Agile project management and issue tracking software
• Experience with Snowflake
• Experience with spatial data processing






