nTech Workforce

ETL Data Engineer - Airflow

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for an ETL Data Engineer with 5+ years of experience, focusing on Airflow and SQL, for a 6-month W2 contract. Remote work is available, but candidates must be in the Pacific Time Zone, preferably near San Francisco, CA.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
November 5, 2025
🕒 - Duration
6 months
-
🏝️ - Location
Remote
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
San Francisco, CA
-
🧠 - Skills detailed
#Data Lake #Apache Airflow #Scala #Statistics #SQL (Structured Query Language) #Airflow #Mathematics #ETL (Extract, Transform, Load) #Data Integrity #Python #Data Science #Data Quality #Code Reviews #Spark (Apache Spark) #Data Pipeline #Documentation #Trino #Computer Science #Data Engineering #Data Modeling #Data Processing
Role description
ONLY W2
Candidates must be located in the Pacific Time Zone.

Title: ETL Data Engineer
Location: San Francisco, CA 94107 (Remote)
Duration: 6-Month Contract

Job Description

Terms of Employment
• W2 contract, 6 months
• Remote opportunity (candidates from the San Francisco area are highly preferred)
• Shift schedule: 08:00 AM - 05:00 PM

Overview
Data powers all the decisions we make. It is the core of our business, helping us create a great transportation experience for our customers and providing insights into the effectiveness of our services and products. You will be part of the core Rider team, focusing on developing ETL pipelines that support decision-making for demand, finance, and competitive data. Your contributions will help data scientists, analysts, and business leaders make informed decisions that drive success.

You will need to evaluate multiple approaches and implement solutions based on fundamental principles, best practices, and supporting data. By architecting, building, and launching robust data pipelines, you will enable seamless access to the insights that fuel critical functions such as Analytics, Data Science, and Engineering.

Responsibilities
• Build core business data pipelines.
• Design data models and schemas to meet business and engineering requirements.
• Define and implement data quality checks to ensure ongoing data consistency.
• Perform SQL tuning to optimize data processing performance.
• Write clean, well-tested, and maintainable code, prioritizing scalability and cost efficiency.
• Conduct code reviews to uphold code quality standards.
• Produce high-quality documentation to facilitate ownership transfer and ongoing support.
• Collaborate with internal and external partners to remove blockers, provide support, and achieve results.
• Develop, implement, and maintain robust Airflow ETL pipelines to support business and product decisions (a minimal illustrative sketch follows the Preferred Skills & Experience section below).
• Productionize SQL and business logic, and perform data modeling to optimize data structures.
• Optimize and troubleshoot unhealthy pipelines to ensure data reliability and performance.
• Implement data quality validation frameworks to maintain data integrity.
• Build essential data feeds, such as a lifetime value (LTV) pipeline for riders, and create reliable data sources for dashboards.
• Collaborate closely with data scientists and engineers on the Rider team to support their data needs.
• Work with the central data platform team for technical guidance and code reviews.

Required Skills & Experience
• 5+ years of professional experience in data engineering or a related field.
• Strong expertise in SQL and experience with Spark and/or Trino.
• Proficiency in Python.
• Strong data modeling skills and a deep understanding of ETL processes.
• Experience building and optimizing complex data models and pipelines.
• Extensive experience with Apache Airflow for building and managing ETL pipelines.
• Familiarity with the modern data stack, including Hive-based or partitioned data lake structures.
• Strong sense of ownership and accountability for your work.
• Excellent critical thinking and problem-solving skills.

Preferred Skills & Experience
• Experience working directly with cross-functional teams (data analytics, data science, engineering) to align data engineering solutions with business goals.
• Experience working at a large, well-known tech company (e.g., Meta, Google, Netflix), especially with consumer-facing products.
• Bachelor of Science in Computer Science or a related field (e.g., Statistics, Mathematics, Electrical Engineering).
• Experience working on diverse, multi-project backlogs and adapting to changing priorities.
• Familiarity with data quality frameworks and data validation best practices.
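To illustrate the kind of work described under Responsibilities, the following is a minimal sketch of an Airflow DAG that loads one day's partition of an aggregate table and then runs a simple data quality check. It assumes Airflow 2.x with the common SQL provider installed and a configured warehouse connection; the DAG ID, connection ID, table names, and SQL are hypothetical placeholders, not details of this project.

# Minimal, hypothetical sketch: partitioned ETL load followed by a row-count
# data quality check. All identifiers below are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.providers.common.sql.operators.sql import (
    SQLCheckOperator,
    SQLExecuteQueryOperator,
)

with DAG(
    dag_id="daily_demand_etl",          # hypothetical pipeline name
    schedule="@daily",
    start_date=datetime(2025, 1, 1),
    catchup=False,
    tags=["etl", "data-quality"],
) as dag:
    # Load step: rebuild one day's partition of an aggregate table (Hive-style SQL).
    load_daily_demand = SQLExecuteQueryOperator(
        task_id="load_daily_demand",
        conn_id="warehouse",            # placeholder connection ID
        sql="""
            INSERT OVERWRITE TABLE demand_daily PARTITION (ds = '{{ ds }}')
            SELECT region, COUNT(*) AS trips
            FROM raw_trips
            WHERE ds = '{{ ds }}'
            GROUP BY region
        """,
    )

    # Data quality check: SQLCheckOperator fails the run if the query returns a
    # falsy value, i.e. if the freshly written partition is empty.
    check_daily_demand = SQLCheckOperator(
        task_id="check_daily_demand",
        conn_id="warehouse",
        sql="SELECT COUNT(*) > 0 FROM demand_daily WHERE ds = '{{ ds }}'",
    )

    load_daily_demand >> check_daily_demand

In practice, the check task could be replaced by column-level checks or a dedicated data quality validation framework, as referenced in the Responsibilities and Preferred Skills above.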
Sincerely,
Preetam Raj
Lead Technical Recruiter
nTech Workforce Inc.
D: 410-505-4857 EXT: 726
E: preetam@ntechworkforce.com