

Motion Recruitment
Senior Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer (Data Quality) on a contract-to-hire basis, 100% remote (EST/CST). Key skills include AWS, Redshift, PySpark, and data validation techniques. Strong experience in Data QA/ETL Testing and Python automation is required.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
May 6, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Remote
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
United States
-
🧠 - Skills detailed
#Data Quality #AI (Artificial Intelligence) #Data Lake #Agile #BI (Business Intelligence) #Anomaly Detection #dbt (data build tool) #Redshift #Automated Testing #Apache Airflow #Airflow #DynamoDB #S3 (Amazon Simple Storage Service) #Data Engineering #ETL (Extract, Transform, Load) #Data Pipeline #DMS (Data Migration Service) #Documentation #SQL (Structured Query Language) #Spark (Apache Spark) #DevOps #Python #AWS (Amazon Web Services) #Scrum #Data Ingestion #PySpark #Automation #Cloud #Lambda (AWS Lambda)
Role description
We are seeking a highly skilled Senior Data Engineer (Data Quality) to support a modern AWS-based data platform. This role focuses on ensuring data quality, reliability, and performance across end-to-end data pipelines.
You will work with a robust cloud ecosystem including AWS, Redshift, PySpark, Airflow, and dbt, driving test automation and validation across ingestion, transformation, and reporting layers.
Job Title: Senior Data Engineer (Data Quality Engineer)
Location: 100% Remote (EST/CST only)
Employment Type: Contract-to-Hire
🚀 Key Responsibilities
• Define and implement end-to-end testing strategies for data platforms
• Design and build automated testing frameworks for data validation
• Validate data ingestion, transformation logic, and pipeline reliability
• Develop and maintain test scripts for AWS Redshift Serverless
• Perform testing across:
  • AWS Redshift, Glue, DMS
  • EventBridge, Data Lakes
  • PySpark (Deequ)
  • Apache Airflow, dbt
• Conduct data validation (schema checks, reconciliation, anomaly detection)
• Analyze test coverage and ensure adequate validation across workflows
• Prepare and manage test data sets
• Review solution architecture, data models, and technical documentation
• Collaborate with cross-functional teams (Data Engineering, BI, DevOps, Product)
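The validation tasks above (schema checks, reconciliation, anomaly detection) can be sketched in plain Python. This is an illustrative sketch only — the schema, function names, and thresholds are hypothetical, and in practice such checks would typically run inside a framework like Deequ or be orchestrated from an Airflow DAG.

```python
from statistics import mean, stdev

# Hypothetical expected schema for a loaded table: column name -> Python type.
EXPECTED_SCHEMA = {"order_id": int, "amount": float, "region": str}

def check_schema(rows):
    """Return a list of schema violations (missing columns or wrong types)."""
    errors = []
    for i, row in enumerate(rows):
        for col, typ in EXPECTED_SCHEMA.items():
            if col not in row:
                errors.append(f"row {i}: missing column {col!r}")
            elif not isinstance(row[col], typ):
                errors.append(f"row {i}: {col!r} expected {typ.__name__}")
    return errors

def reconcile_counts(source_count, target_count):
    """Source-to-target row-count reconciliation: any drift is a failure."""
    return source_count == target_count

def daily_count_anomalies(daily_counts, z_threshold=3.0):
    """Flag days whose load volume deviates > z_threshold std devs from the mean."""
    mu, sigma = mean(daily_counts), stdev(daily_counts)
    if sigma == 0:
        return []
    return [i for i, count in enumerate(daily_counts)
            if abs(count - mu) / sigma > z_threshold]
```

A sudden spike or drop in daily row counts then surfaces as an index in the anomaly list, which a scheduler can turn into an alert.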
🧠 Required Qualifications
• Strong experience in Data QA / ETL Testing / Data Engineering validation
• Hands-on experience with:
  • AWS (Redshift, Glue, DMS, S3, Lambda, Step Functions, DynamoDB)
  • SQL and data warehousing concepts
  • Python for automation
  • Apache Airflow and dbt
• Experience with data validation techniques (schema validation, reconciliation, anomaly detection)
• Familiarity with data quality tools like PySpark Deequ or similar
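Combining two of the required skills — SQL and Python automation — a minimal source-to-target reconciliation might look like the following. Here an in-memory SQLite database stands in for a warehouse such as Redshift, and the table and column names are invented for the example.

```python
import sqlite3

# In-memory SQLite stands in for a warehouse like Redshift in this sketch.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE staging_orders (order_id INTEGER, amount REAL);
    CREATE TABLE dw_orders      (order_id INTEGER, amount REAL);
    INSERT INTO staging_orders VALUES (1, 10.0), (2, 20.0), (3, 30.0);
    INSERT INTO dw_orders      VALUES (1, 10.0), (2, 20.0), (3, 30.0);
""")

def reconcile(conn, source, target):
    """Compare row counts and summed amounts between two tables."""
    query = "SELECT COUNT(*), COALESCE(SUM(amount), 0) FROM {}"
    src = conn.execute(query.format(source)).fetchone()
    tgt = conn.execute(query.format(target)).fetchone()
    return {"rows_match": src[0] == tgt[0],
            "amount_match": abs(src[1] - tgt[1]) < 1e-9}

result = reconcile(conn, "staging_orders", "dw_orders")
```

Comparing both a row count and a summed measure catches dropped rows as well as silently corrupted values, which a count alone would miss.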
⭐ Preferred Qualifications
• Experience designing enterprise-level testing strategies for data platforms
• Knowledge of CI/CD pipelines for data and testing automation
• Experience with BI testing (e.g., QuickSight or similar tools)
• Familiarity with CloudFormation/CDK
• Experience working in Agile/Scrum environments
• Exposure to Gen AI or Node.js is a plus






