Brio Digital

Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer based in Leeds, offering £60,000–£65,000 for a full-time, permanent position. Key skills include Python, PySpark, SQL, and AWS. Experience with data pipelines and cloud infrastructure is essential.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
💰 - Day rate
295
🗓️ - Date
March 4, 2026
🕒 - Duration
More than 6 months
🏝️ - Location
Hybrid
📄 - Contract
Permanent
🔒 - Security
Unknown
📍 - Location detailed
Leeds, England, United Kingdom
🧠 - Skills detailed
#PySpark #Data Lifecycle #ETL (Extract, Transform, Load) #Infrastructure as Code (IaC) #Redshift #Cloud #SQL Queries #S3 (Amazon Simple Storage Service) #Deployment #Spark (Apache Spark) #SQL (Structured Query Language) #Python #Lambda (AWS Lambda) #Data Pipeline #AWS (Amazon Web Services) #Data Architecture #Scala #Data Quality #Data Engineering #Data Processing #Datasets #Agile
Role description
Data Engineer
📍 Leeds (2 days per week in office)
💰 £60,000–£65,000
🕒 Full-time, Permanent

The Role
We’re looking for a Data Engineer to join a growing data team and help design, build and optimise scalable data solutions. You’ll play a key role in developing robust data pipelines, improving data quality, and supporting analytics and reporting across the business. This is a hands-on role suited to someone who enjoys working across the full data lifecycle, from ingestion and transformation through to modelling and optimisation in the cloud.

Key Responsibilities
• Design, build and maintain scalable data pipelines using Python and PySpark
• Develop and optimise SQL queries for analytics and reporting
• Work with AWS services to build and manage cloud-based data infrastructure
• Collaborate with analysts and stakeholders to translate business requirements into technical solutions
• Improve data quality, reliability and performance across systems
• Contribute to data architecture decisions and best practices
• Support CI/CD processes and deployment of data solutions

Tech Stack
• Python
• PySpark
• SQL
• AWS (e.g. S3, Glue, Redshift, Lambda, EMR or similar)

About You
• Solid experience as a Data Engineer or in a similar data-focused role
• Strong Python skills with experience building production-grade pipelines
• Hands-on experience with PySpark and distributed data processing
• Strong SQL skills and experience working with large datasets
• Experience building and maintaining data solutions in AWS
• Good understanding of data modelling and ETL/ELT principles
• Comfortable working in a collaborative, fast-paced environment

Nice to Have
• Experience with Glue workflows
• Infrastructure as Code experience
• Exposure to data warehousing solutions
• Experience working in Agile teams

Apply now or email dom@briodigital.io for more information.