

Queen Square Recruitment
Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer with strong ETL/ELT testing expertise, offering a £460/day contract for 6-12 months, hybrid in Stevenage. Essential skills include advanced SQL, Python, API testing, and CI/CD experience with GitHub Actions.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
💰 - Day rate
460
🗓️ - Date
May 16, 2026
🕒 - Duration
More than 6 months
🏝️ - Location
Hybrid
📄 - Contract
Inside IR35
🔒 - Security
Unknown
📍 - Location detailed
Stevenage, England, United Kingdom
🧠 - Skills detailed
#Data Lifecycle #Cloud #GitHub #Data Quality #SQL (Structured Query Language) #Scala #Data Pipeline #Airflow #Pandas #API (Application Programming Interface) #Databricks #Data Engineering #Strategy #Logging #ADF (Azure Data Factory) #Automation #XML (eXtensible Markup Language) #Debugging #Pytest #Data Integrity #POSTMAN #Python #ETL (Extract, Transform, Load)
Role description
Data Engineer
Location: Stevenage - Hybrid, 3 days per week onsite
Start Date: ASAP
Contract Rate: £460 per day inside IR35
Duration: 6 to 12 months initially
Role Overview
Our client is seeking a Data Engineer with strong data quality and testing expertise to lead the design and implementation of QA frameworks for business-critical ETL/ELT pipelines. This is a greenfield opportunity to define testing strategy and build scalable, automation-first solutions in a fast-paced, data-driven environment. You’ll work across the full data lifecycle, ensuring accuracy and reliability of outputs including APIs, Excel reports, and XML files. Leveraging tools such as Databricks, Python (Pytest, Pandas), and GitHub Actions, you will establish best practices, improve data trust, and enable high-quality data delivery.
Key Responsibilities
• Own and define end-to-end testing strategy for data pipelines
• Build scalable automation frameworks using Python and Pytest
• Develop SQL-based validation for reconciliation and data integrity
• Automate Excel validation using Pandas and OpenPyXL
• Validate XML outputs using lxml and xmlschema
• Deliver API test automation using Postman/Newman
• Embed testing into CI/CD pipelines with GitHub Actions
• Create clear logging and reporting for fast issue resolution
• Collaborate with data engineers to improve pipeline quality
• Mentor team members and establish QA best practices
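The SQL-based reconciliation responsibility above can be sketched as a plain assert-style check that Pytest would collect and run. This is a minimal illustration using an in-memory SQLite database; the table and column names are hypothetical, and a real framework for this role would run equivalent checks against Databricks rather than SQLite.

```python
import sqlite3

def reconcile_row_counts(conn, source_table, target_table):
    """Compare row counts between a source and target table; return (src, tgt, match)."""
    cur = conn.cursor()
    src = cur.execute(f"SELECT COUNT(*) FROM {source_table}").fetchone()[0]
    tgt = cur.execute(f"SELECT COUNT(*) FROM {target_table}").fetchone()[0]
    return src, tgt, src == tgt

# Hypothetical demo data: a source table and the target table loaded by the pipeline.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE src_orders (id INTEGER PRIMARY KEY, amount REAL);
    CREATE TABLE tgt_orders (id INTEGER PRIMARY KEY, amount REAL);
    INSERT INTO src_orders VALUES (1, 10.0), (2, 20.5);
    INSERT INTO tgt_orders VALUES (1, 10.0), (2, 20.5);
""")

src, tgt, ok = reconcile_row_counts(conn, "src_orders", "tgt_orders")
```

Wrapped in a `test_` function with a bare `assert`, a check like this slots directly into a Pytest suite and a GitHub Actions CI job.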
Skills & Experience
Essential
• Strong experience testing ETL/ELT data pipelines
• Advanced SQL for validation and debugging
• Expertise in Python automation (Pytest frameworks)
• Experience with Pandas and OpenPyXL for Excel validation
• XML validation experience (lxml, xmlschema, XSD)
• API testing with Postman/Newman
• CI/CD experience using GitHub Actions
• Familiarity with Databricks or similar platforms
• Strong understanding of data quality and validation principles
• Experience building QA processes and mentoring others
Desirable
• Exposure to cloud-based data platforms
• Experience with tools like Great Expectations
• Knowledge of Airflow or ADF
• Large-scale data testing experience
• End-to-end validation across pipelines, APIs, and files
• Experience scaling QA practices in data teams
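As an illustration of the XML validation skills listed above, the sketch below checks well-formedness and required child elements using the standard library's `xml.etree.ElementTree` as a lightweight stand-in for lxml/xmlschema; the element names and required fields are hypothetical, and a real XSD-backed check would use the `xmlschema` package instead.

```python
import xml.etree.ElementTree as ET

REQUIRED_FIELDS = {"id", "amount"}  # hypothetical schema expectations

def validate_order_xml(xml_text):
    """Check well-formedness and required child elements; return a list of errors."""
    errors = []
    try:
        root = ET.fromstring(xml_text)
    except ET.ParseError as exc:
        return [f"not well-formed: {exc}"]
    for order in root.findall("order"):
        missing = REQUIRED_FIELDS - {child.tag for child in order}
        if missing:
            errors.append(f"order missing fields: {sorted(missing)}")
    return errors

sample = "<orders><order><id>1</id><amount>10.0</amount></order></orders>"
errors = validate_order_xml(sample)
```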
If you have the relevant skills and experience, please apply promptly to be considered.






