

Digital Skills Ltd
Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is a Data Engineer position for a 6-month contract in Manchester, offering up to £90 per hour. It requires 5+ years of experience building big data pipelines and strong skills in Kafka, Hadoop, Spark, Python, dbt, and Snowflake.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
720
-
🗓️ - Date
March 21, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
Hybrid
-
📄 - Contract
Inside IR35
-
🔒 - Security
Unknown
-
📍 - Location detailed
Manchester Area, United Kingdom
-
🧠 - Skills detailed
#SQL (Structured Query Language) #Datasets #Data Governance #Security #Data Quality #Scala #Data Security #Compliance #AWS (Amazon Web Services) #Snowflake #Azure #EDW (Enterprise Data Warehouse) #Hadoop #Spark (Apache Spark) #Data Pipeline #Agile #Data Warehouse #Data Engineering #Monitoring #Big Data #Python #ML (Machine Learning) #SQL Server #Vault #dbt (data build tool) #MySQL #Cloud #Data Vault #Observability #Kafka (Apache Kafka)
Role description
Data Engineer
6-Month Contract
Manchester - Initially 2-3 days per week in the office, reducing to 1-2 days per month after the first three weeks.
Up to £90 per hour - Inside IR35 (based on a 37.5-hour working week)
A global technology business is seeking an experienced Data Engineer to join a high-performing data team responsible for delivering a scalable, secure, and fully governed data platform. This is an exciting opportunity to work on large-scale systems and help modernise and optimise enterprise data solutions.
The Role
As a Data Engineer, you will play a key role in designing, building, and maintaining high-performance data pipelines and platforms.
You will contribute to replacing legacy and ad-hoc solutions with a modern, scalable architecture that enables high-quality data production and advanced analytics.
Senior-level engineers will also be expected to mentor others, provide architectural guidance, and promote engineering excellence across the team.
Key Responsibilities
Data Engineering & Platform Delivery
• Design, build, and maintain scalable, secure, and well-governed data pipelines
• Embed data governance, lineage, retention, monitoring, and alerting into pipelines
• Ensure high data quality across core datasets with end-to-end ownership
• Maintain data security, integrity, and compliance in line with best practices
• Contribute to architectural principles, non-functional requirements, and quality standards
• Write high-quality, well-tested code following CI/CD and Agile practices
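To make the "embed monitoring and alerting into pipelines" responsibility concrete, here is a minimal, stdlib-only Python sketch of a pipeline step with a built-in data-quality gate. All names (`validate_and_load`, `QualityReport`, the `id` field, the 5% threshold) are hypothetical illustrations, not part of this role's actual platform.

```python
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline")


@dataclass
class QualityReport:
    """Per-batch quality metrics that a real platform would export as telemetry."""
    total: int
    rejected: int

    @property
    def reject_rate(self) -> float:
        return self.rejected / self.total if self.total else 0.0


def validate_and_load(rows, reject_threshold=0.05):
    """Drop rows missing a primary key and alert if the reject rate is too high."""
    good = [r for r in rows if r.get("id") is not None]
    report = QualityReport(total=len(rows), rejected=len(rows) - len(good))
    if report.reject_rate > reject_threshold:
        # In a production pipeline this would emit a metric or page on-call.
        log.warning("reject rate %.1f%% exceeds threshold", report.reject_rate * 100)
    return good, report


rows = [{"id": 1}, {"id": None}, {"id": 3}]
loaded, report = validate_and_load(rows)
print(len(loaded), report.rejected)
```

The point of the sketch is that quality checks and alert hooks live inside the load path itself, rather than in a separate batch audit run after the fact.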
Required Skills & Experience
• 5+ years' experience building big data pipelines in distributed environments
• Strong experience with Kafka, Hadoop, Spark, and/or Python
• Strong experience with dbt and Snowflake
• Experience with Data Vault and dimensional data modelling
• Strong SQL skills and experience with enterprise data warehouse environments
• Experience embedding governance, monitoring, lineage, and security into data pipelines
• Experience working on large-scale, well-governed, compliant systems
• Solid understanding of CI/CD and Agile methodologies
• Strong knowledge of cloud platforms (AWS preferred; Azure experience beneficial)
• Understanding of cloud security best practices
• Exposure to observability tooling, MySQL, or SQL Server stack beneficial
• Good understanding of analytics and machine learning fundamentals
• Excellent written and verbal communication skills
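For candidates unfamiliar with the Data Vault requirement above, a common pattern in Data Vault 2.0 modelling is deriving a hub's hash key from its normalised business key. The function below is an illustrative sketch, not this employer's implementation; the normalisation rules (trim, uppercase, `||` delimiter) follow a widely used Data Vault 2.0 convention.

```python
import hashlib


def hub_hash_key(*business_key_parts: str) -> str:
    """MD5 over the trimmed, uppercased, '||'-delimited business key."""
    normalised = "||".join(p.strip().upper() for p in business_key_parts)
    return hashlib.md5(normalised.encode("utf-8")).hexdigest()


print(hub_hash_key("cust-001"))
```

Because the key is normalised before hashing, loads from different source systems produce the same hub key for the same business entity regardless of casing or padding.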