Signify Technology

QA Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a QA Data Engineer on a 3-month remote contract, focusing on data validation for a cloud migration project. Key skills include SQL, dbt, Airflow, and automation. Strong data quality practices and experience with ETL are required.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
November 20, 2025
🕒 - Duration
3 to 6 months
-
🏝️ - Location
Remote
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Chicago, IL
-
🧠 - Skills detailed
#Data Warehouse #Documentation #Cloud #dbt (data build tool) #Migration #ETL (Extract, Transform, Load) #Data Engineering #Data Accuracy #Data Quality #Automation #Python #SQL (Structured Query Language) #Airflow
Role description
CONTRACT ROLE | USA, remote | 3 months

Project Overview

A large enterprise is in the middle of a major modernization initiative, moving from a legacy data warehouse and ETL environment to a modern cloud-based platform leveraging dbt, SQL pipelines, and Airflow. Hundreds of models and tables will be transitioned within the next few months, and the engineering team needs additional hands to ensure the new environment is correct, consistent, and ready for production use. This role is focused entirely on data validation, reconciliation, certification, and delivery assurance: ensuring the new system behaves exactly as expected before go-live.

Role Summary

You will join a dedicated migration team responsible for validating that newly developed data models are accurate reproductions of their legacy counterparts. The focus is on delivering fast, reliable validation coverage without sacrificing confidence in production data. This engagement requires strong SQL skills, a deep understanding of data quality best practices, and the ability to automate validation quickly and pragmatically.

Key Responsibilities

• Develop and execute a repeatable data validation framework (illustrative SQL sketches follow this description) that includes:
  • Table-level row and record count checks
  • Aggregate and metric comparisons
  • Key field and column-level matching
  • Targeted record sampling and side-by-side diffs
• Write and run SQL test cases that confirm data accuracy, completeness, and fidelity.
• Build lightweight automation using tools such as:
  • dbt tests (see the singular-test sketch below)
  • SQL scripts
  • Python notebooks
  • Data-diff utilities
• Collaborate closely with engineers to:
  • Understand legacy transformation logic
  • Communicate discrepancies quickly
  • Align on remediation timelines
• Maintain structured reporting on:
  • Validation progress
  • Defects and issue owners
  • Turnaround and release readiness
• Produce clear documentation that helps downstream analysts trust the new data environment.
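For illustration, a table-level reconciliation check of the kind described above might compare row counts and one aggregate metric between a legacy table and its migrated counterpart. This is a minimal sketch: the schema and column names (legacy_dw.orders, cloud_dw.orders, order_total) are hypothetical placeholders, not names from this project.

```sql
-- Minimal reconciliation sketch: row counts and one summed metric,
-- legacy vs. migrated. All table and column names are placeholders.
WITH legacy AS (
    SELECT COUNT(*) AS row_count, SUM(order_total) AS total_amount
    FROM legacy_dw.orders
),
migrated AS (
    SELECT COUNT(*) AS row_count, SUM(order_total) AS total_amount
    FROM cloud_dw.orders
)
SELECT
    legacy.row_count                             AS legacy_rows,
    migrated.row_count                           AS migrated_rows,
    legacy.row_count - migrated.row_count        AS row_count_diff,
    legacy.total_amount - migrated.total_amount  AS amount_diff
FROM legacy
CROSS JOIN migrated;
```

A zero row_count_diff and amount_diff gives quick table-level confidence before drilling into row-level comparisons.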
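Key-field matching and side-by-side diffs can be sketched the same way: a full outer join on the business key surfaces rows that exist in only one system or that disagree on a compared column. Again, all names are hypothetical, and IS DISTINCT FROM assumes a dialect that supports it (most modern warehouses do); elsewhere, an explicit NULL-safe comparison is needed.

```sql
-- Sketch of a side-by-side diff on a business key. Returns only rows
-- that are missing from one side or whose compared column disagrees.
SELECT
    COALESCE(l.order_id, m.order_id) AS order_id,
    l.status AS legacy_status,
    m.status AS migrated_status
FROM legacy_dw.orders AS l
FULL OUTER JOIN cloud_dw.orders AS m
    ON l.order_id = m.order_id
WHERE l.order_id IS NULL                    -- present only in migrated
   OR m.order_id IS NULL                    -- present only in legacy
   OR l.status IS DISTINCT FROM m.status;   -- NULL-safe value mismatch
```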
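The dbt-tests bullet can be made concrete with a singular test: a SQL file saved under the project's tests/ directory, which dbt treats as failing whenever it returns any rows. The ref and source targets below are assumptions for illustration; an actual project would point them at its own models and declared legacy sources.

```sql
-- tests/assert_orders_match_legacy.sql (hypothetical file)
-- dbt singular test: passes only when this query returns zero rows.
SELECT m.order_id
FROM {{ ref('orders') }} AS m
LEFT JOIN {{ source('legacy_dw', 'orders') }} AS l
    ON m.order_id = l.order_id
WHERE l.order_id IS NULL                              -- key missing in legacy
   OR m.order_total IS DISTINCT FROM l.order_total    -- value mismatch
```

Wiring tests like this into the existing Airflow-orchestrated dbt runs makes validation repeatable on every load rather than a one-off audit.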