

Motion Recruitment
Data Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer on a 6+ month W-2 contract, fully remote. Key skills include SQL, Python, Apache Airflow, and Hive. Requires 3+ years in data engineering and experience with data quality frameworks and large-scale data systems.
Country
United States
Currency
$ USD
Day rate
600
Date
February 27, 2026
Duration
More than 6 months
Location
Remote
Contract
W2 Contractor
Security
Unknown
Location detailed
United States
Skills detailed
Python, ETL (Extract, Transform, Load), Airflow, Monitoring, AI (Artificial Intelligence), Presto, Data Quality, Computer Science, Trino, Documentation, Datasets, Scala, Apache Airflow, Data Processing, Data Engineering, SQL (Structured Query Language), Spark (Apache Spark), SQL Queries, Complex Queries, Data Warehouse, Automation, Data Manipulation, Data Pipeline
Role description
Data never stops moving. Your work gives it direction, discipline, and trust.
About the Role
We are seeking a Data Engineer who thrives at the intersection of systems, logic, and scale: someone who builds resilient data pipelines, safeguards data quality, and loves solving complex problems, whether in production or on a LeetCode challenge.
Location: Fully Remote (U.S. Based)
Duration: 6+ Month Contract
Type: W-2 Contract Only. C2C, third-party, or sponsorship arrangements are not supported now or in the future.
Responsibilities
β’ Design, build, and maintain scalable, reliable pipelines using Apache Airflow or similar orchestration tools.
β’ Develop and optimize advanced SQL queries to support analytics, reporting, and data validation.
β’ Work extensively with Hive and distributed query engines to process large datasets efficiently.
β’ Use Python to automate workflows, transform data, and improve pipeline reliability.
β’ Implement data quality monitoring, validation rules, and alerting to ensure trusted data delivery.
β’ Collaborate with technical and non-technical stakeholders to translate requirements into durable solutions.
β’ Own tasks end-to-end, managing priorities and delivering independently.
β’ Create and maintain clear technical documentation for maintainability and team knowledge sharing.
β’ Continuously optimize pipelines for performance, reliability, and scalability.
β’ Apply strong algorithmic thinking and problem-solving skills, ideally demonstrated through platforms like LeetCode, HackerRank, or similar.
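To give a flavor of the data-quality work described above, here is a minimal sketch of validation rules with alerting in plain Python. The rule names, fields, and thresholds are illustrative, not from this posting:

```python
# Minimal data-quality check sketch: run each rule over a batch of rows
# and collect alert messages for any rule that fails.
# All names and thresholds are hypothetical.

def check_row_count(rows, minimum=1):
    """Fail if the batch is unexpectedly small or empty."""
    return len(rows) >= minimum

def check_no_nulls(rows, required_fields):
    """Fail if any required field is missing or None in any row."""
    return all(row.get(f) is not None for row in rows for f in required_fields)

def run_quality_checks(rows):
    """Run each rule and return a list of alert messages for failures."""
    alerts = []
    if not check_row_count(rows, minimum=1):
        alerts.append("row_count: batch below minimum size")
    if not check_no_nulls(rows, required_fields=["id", "amount"]):
        alerts.append("null_check: required field missing")
    return alerts
```

In a production pipeline, each rule would typically run as a downstream task in the orchestrator (e.g., an Airflow task per check) and route its alerts to on-call tooling rather than returning a list.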
Qualifications
β’ Expert in SQL, including complex queries, performance tuning, and large datasets.
β’ Strong hands-on experience with Python for automation and data manipulation.
β’ Experience building pipelines using Apache Airflow or similar orchestration tools.
β’ Hands-on experience with Hive and large-scale data warehouses.
β’ Familiarity with distributed data processing and performance optimization.
β’ Experience implementing data quality frameworks and alerting.
β’ Passion for problem-solving, algorithmic thinking, and coding challenges.
β’ Ability to work independently while collaborating across teams.
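As an illustration of the SQL-driven validation the qualifications call for, the pattern can be sketched with Python's built-in sqlite3 module; the table and column names are hypothetical, and a real warehouse would use Hive, Presto, or Trino instead:

```python
import sqlite3

# Hypothetical validation query: flag duplicate order IDs in a table,
# the kind of aggregate check a data engineer runs before trusting a load.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id INTEGER, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [(1, 9.99), (2, 15.00), (2, 15.00), (3, 4.50)],
)

# GROUP BY + HAVING: any order_id appearing more than once is a duplicate.
duplicates = conn.execute(
    """
    SELECT order_id, COUNT(*) AS n
    FROM orders
    GROUP BY order_id
    HAVING n > 1
    """
).fetchall()
```

The same query shape (group, count, filter on the count) transfers directly to HiveQL or Trino SQL against large tables.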
Preferred Skills
• Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
β’ 3+ years in data engineering or large-scale data systems.
β’ Familiarity with Presto/Trino, Spark, or other modern data tools.
β’ Experience applying automation or AI-driven approaches to optimize workflows.
Why This Role:
You'll work where logic meets scale, transforming complexity into clarity. You'll solve problems that matter, write code that endures, and build pipelines that never sleep. If you love LeetCode, algorithms, and building systems people rely on, this is the place to thrive.
Equal Opportunity Statement
We are committed to diversity and inclusivity.






