nTech Workforce

Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer on a 6-month W2 contract, offering remote work (Colorado candidates preferred). Requires 5+ years of experience, strong SQL, Python, and Apache Airflow skills, with a focus on ETL processes and data modeling.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
March 7, 2026
🕒 - Duration
6 months
-
🏝️ - Location
Remote
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Colorado, United States
-
🧠 - Skills detailed
#Airflow #Scala #Statistics #Data Quality #Mathematics #Data Processing #ETL (Extract, Transform, Load) #Computer Science #Data Modeling #BI (Business Intelligence) #Spark (Apache Spark) #SQL (Structured Query Language) #Data Lake #Data Science #Data Integrity #Python #Code Reviews #Documentation #Apache Airflow #Data Pipeline #Trino #Data Engineering
Role description
nTech Workforce has an immediate opening for a Data Engineer (Req. 22873).

Terms of Employment
• W2 Contract, 6 Months
• Remote Opportunity (candidates from the Colorado area highly preferred)
• Shift Schedule: 8:00 AM - 5:00 PM

Overview
• Here at Lyft, data powers all the decisions we make. It is the core of our business, helping us create a great transportation experience for our customers and providing insights into the effectiveness of our services and products. You will be part of Lyft's core Rider team, focusing on developing ETL pipelines that support decision-making processes for demand, finance, and competitive data. Your contributions will help data scientists, analysts, and business leaders make informed decisions that drive success.
• You will need to evaluate multiple approaches and implement solutions based on fundamental principles, best practices, and supporting data. By architecting, building, and launching robust data pipelines, you will enable seamless access to insights that fuel critical functions such as Analytics, Data Science, and Engineering.

Responsibilities
• Build core business data pipelines.
• Design data models and schemas to meet business and engineering requirements.
• Define and implement data quality checks to ensure ongoing data consistency.
• Perform SQL tuning to optimize data processing performance.
• Write clean, well-tested, and maintainable code, prioritizing scalability and cost efficiency.
• Conduct code reviews to uphold code quality standards.
• Produce high-quality documentation to facilitate ownership transfer and ongoing support.
• Collaborate with internal and external partners to remove blockers, provide support, and achieve results.
• Develop, implement, and maintain robust Airflow ETL pipelines to support business and product decisions.
• Productionalize SQL and business logic, and perform data modeling to optimize data structures.
• Optimize and troubleshoot unhealthy pipelines to ensure data reliability and performance.
• Implement data quality validation frameworks to maintain data integrity.
• Build essential data feeds, such as a lifetime value (LTV) pipeline for riders, and create reliable data sources for dashboards.
• Collaborate closely with data scientists and engineers on the Rider team to support their data needs.
• Work with the central data platform team for technical guidance and code reviews.

Required Skills & Experience
• 5+ years of professional experience in data engineering or a related field.
• Strong expertise in SQL and experience with Spark and/or Trino.
• Proficiency in Python.
• Strong data modeling skills and a deep understanding of ETL processes.
• Experience building and optimizing complex data models and pipelines.
• Extensive experience with Apache Airflow for building and managing ETL pipelines.
• Familiarity with the modern data stack, including Hive-based or partitioned data lake structures.
• Strong sense of ownership and accountability for your work.
• Excellent critical thinking and problem-solving skills.

Preferred Skills & Experience
• Experience working directly with cross-functional teams (data analytics, data science, engineering) to align data engineering solutions with business goals.
• Experience working at a large, well-known tech company (e.g., Meta, Google, Netflix), especially with consumer-facing products.
• Bachelor of Science in Computer Science or a related field (e.g., Statistics, Mathematics, Electrical Engineering).
• Experience working on diverse, multi-project backlogs and adapting to changing priorities.
• Familiarity with data quality frameworks and data validation best practices.
Benefits Information
• Medical Insurance; Vision Insurance; Dental Insurance
• 401K Retirement Plan (Discretionary Match Offered)
• Ancillary Coverage (Life, AD&D, Short Term / Long Term Disability)
• Employee Referral Bonus
• Bi-Weekly Direct Deposit
• Note: As a contingent worker with nTech, you'll be paid for all approved hours worked; paid time off and paid holidays are not provided.

nTech is an equal opportunity employer. All offers of employment are contingent upon pre-employment drug and background screenings. Only candidates who meet all of the above client requirements will be contacted by a recruiter.