Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer with an initial 6-month contract, competitive pay, and a hybrid location in Cardiff, UK. Key skills include ETL testing, SQL, Python, and cloud experience (preferably GCP). Immediate start required.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
-
🗓️ - Date discovered
September 13, 2025
🕒 - Project duration
More than 6 months
-
🏝️ - Location type
Hybrid
-
📄 - Contract type
Inside IR35
-
🔒 - Security clearance
Unknown
-
📍 - Location detailed
Cardiff, Wales, United Kingdom
-
🧠 - Skills detailed
#Cloud #Azure #Oracle #Data Quality #GitLab #SQL (Structured Query Language) #Data Pipeline #Azure DevOps #PySpark #Data Engineering #BigQuery #Jenkins #DevOps #Spark (Apache Spark) #Python #Automation #Data Warehouse #Agile #GCP (Google Cloud Platform) #Documentation #Dataflow #ETL (Extract, Transform, Load) #Jira #JMeter #Databases
Role description
πŸ“ Location: Hybrid – Cardiff, UK (2-3 days on-site per week) πŸ“… Duration: Initial 6 months (extensions likely) πŸ’Έ Rate: Competitive / Market Rates (Inside IR35) πŸš€ Start Date: ASAP πŸ–₯️ Engagement: Contract (Inside IR35) We’re looking for a Data Quality Engineer to join a dynamic data platform team supporting cloud-native data transformation projects. This role is ideal for engineers passionate about data quality, automation, and ETL testing in cloud environments. βœ… Key Responsibilities: β€’ Define, build, and maintain test automation frameworks and tools β€’ Collaborate with developers to shift-left testing and integrate into CI/CD pipelines β€’ Own and manage the QA process across sprints and releases β€’ Perform ETL, DWH, and data pipeline testing using tools like SQL, Python, PySpark β€’ Maintain test automation aligned with BDD (Cucumber or similar) β€’ Drive best practices, documentation, and quality strategies across the QA function β€’ Support continuous integration using Azure DevOps, Jenkins, or GitLab β€’ Liaise with internal teams and stakeholders to ensure robust delivery 🧠 Ideal Candidate Profile: β€’ Strong hands-on experience in ETL/Data Warehouse Testing using Python or PySpark β€’ Expertise with SQL, relational databases (preferably Oracle) β€’ Experience with cloud platforms, ideally GCP (BigQuery, Dataflow, Cloud Functions) β€’ Proven success implementing automation frameworks and integrating them with pipelines β€’ Familiarity with CI/CD, BDD (Cucumber), and performance testing (JMeter) β€’ Experience working with Agile, JIRA, and test management tools like Zephyr β€’ Excellent communication and stakeholder engagement skills β€’ Experience mentoring QA teams and improving QA maturity