

Data Engineer (W2 Only)
Featured Role | Apply directly with Data Freelance Hub
This role is a Data Engineer (W2) position on a 3-month remote contract; the pay rate is not disclosed. Key skills include 5+ years of data engineering experience, strong SQL, Python, and Apache Airflow expertise, and experience with ETL processes and data modeling.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: September 27, 2025
Project duration: 3 to 6 months
Location type: Remote
Contract type: W2 Contractor
Security clearance: Unknown
Location detailed: United States
Skills detailed: #Airflow #Mathematics #ETL (Extract, Transform, Load) #Data Science #Spark (Apache Spark) #Data Integrity #SQL (Structured Query Language) #Scala #Data Pipeline #Python #Data Quality #Statistics #Data Engineering #Documentation #Data Lake #Trino #Apache Airflow #Computer Science #Data Modeling #Data Processing #Code Reviews
Role description
-----NO C2C----NO C2C-----NO C2C----NO C2C-------
Terms of Employment
• W2 Contract, 3 Months
• Remote Opportunity
• Shift Schedule: 8:00 AM to 5:00 PM
Overview
Data powers all the decisions we make. It is the core of our business, helping us create a great transportation experience for our customers and providing insights into the effectiveness of our services and products. You will be part of Lyft's core Rider team, focusing on developing ETL pipelines that support decision-making for demand, finance, and competitive data. Your contributions will help data scientists, analysts, and business leaders make informed decisions that drive Lyft's success.
You will evaluate multiple approaches and implement solutions grounded in fundamental principles, best practices, and supporting data. By architecting, building, and launching robust data pipelines, you will enable seamless access to the insights that fuel critical functions such as Analytics, Data Science, and Engineering.
Responsibilities
• Build core business data pipelines.
• Design data models and schemas to meet business and engineering requirements.
• Define and implement data quality checks to ensure ongoing data consistency.
• Perform SQL tuning to optimize data processing performance.
• Write clean, well-tested, and maintainable code, prioritizing scalability and cost efficiency.
• Conduct code reviews to uphold code quality standards.
• Produce high-quality documentation to facilitate ownership transfer and ongoing support.
• Collaborate with internal and external partners to remove blockers, provide support, and achieve results.
• Develop, implement, and maintain robust Airflow ETL pipelines to support business and product decisions (see the sketch after this list).
• Productionize SQL and business logic, and perform data modeling to optimize data structures.
• Optimize and troubleshoot unhealthy pipelines to ensure data reliability and performance.
• Implement data quality validation frameworks to maintain data integrity.
• Build essential data feeds, such as a lifetime value (LTV) pipeline for riders, and create reliable data sources for dashboards.
• Collaborate closely with data scientists and engineers on the Rider team to support their data needs.
• Work with the central data platform team for technical guidance and code reviews.
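To make the Airflow responsibilities above concrete, here is a minimal sketch of a daily ETL DAG with a data quality gate, assuming Apache Airflow 2.x and its TaskFlow API. The DAG name, S3 paths, and the placeholder row count are hypothetical, invented for illustration rather than taken from this posting.

from datetime import datetime

from airflow.decorators import dag, task

@dag(
    dag_id="rider_demand_daily",   # hypothetical pipeline name
    schedule="@daily",             # Airflow >= 2.4; older releases use schedule_interval
    start_date=datetime(2025, 1, 1),
    catchup=False,
)
def rider_demand_daily():
    @task
    def extract(ds=None):
        # Airflow injects the logical date string 'ds' from the task context.
        # Return the day's raw-data location (illustrative path).
        return f"s3://raw-events/rides/{ds}/"

    @task
    def transform(raw_path):
        # Business logic and data modeling would go here; this sketch
        # just derives the curated output location from the raw one.
        return raw_path.replace("raw-events", "curated")

    @task
    def quality_check(curated_path):
        # Data quality gate: fail the run rather than publish bad data.
        row_count = 1  # placeholder; replace with a real count query
        if row_count == 0:
            raise ValueError(f"empty output at {curated_path}")

    quality_check(transform(extract()))

rider_demand_daily()

Structuring the quality check as its own task means a failed validation blocks downstream consumers and surfaces directly in the Airflow UI, which matches the posting's emphasis on data quality checks and troubleshooting unhealthy pipelines.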
Required Skills & Experience
• 5+ years of professional experience in data engineering or a related field.
• Strong expertise in SQL and experience with Spark and/or Trino.
• Proficiency in Python.
• Strong data modeling skills and a deep understanding of ETL processes.
• Experience building and optimizing complex data models and pipelines.
• Extensive experience with Apache Airflow for building and managing ETL pipelines.
• Familiarity with the modern data stack, including Hive-based or partitioned data lake structures (see the sketch after this list).
• Strong sense of ownership and accountability for your work.
• Excellent critical thinking and problem-solving skills.
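The partitioned data lake familiarity listed above usually comes down to filtering on partition columns so engines such as Spark or Trino can skip unread files entirely. A short, illustrative PySpark example follows; the rides table and its ds partition column are hypothetical.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("partition_pruning_demo").getOrCreate()

# 'rides' stands in for a Hive-style table partitioned by a date string 'ds'.
# Filtering on the partition column lets Spark prune partitions and scan
# only the matching day's files instead of the whole table.
daily = spark.table("rides").where("ds = '2025-09-27'")

# Aggregations then run over just that one partition's data.
daily.groupBy("city").count().show()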
Preferred Skills & Experience
• Experience working directly with cross-functional teams (data analytics, data science, engineering) to align data engineering solutions with business goals.
• Experience working at a large, well-known tech company (e.g., Meta, Google, Netflix), especially with consumer-facing products.
• Bachelor of Science in Computer Science or a related field (e.g., Statistics, Mathematics, Electrical Engineering).
• Experience working on diverse, multi-project backlogs and adapting to changing priorities.
• Familiarity with data quality frameworks and data validation best practices.