LeadStack Inc.

Senior Data Engineer - 25-03290

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer (25-03290) in San Francisco, CA (Hybrid), running 6 months with possible extension at $80/hr - $100/hr. It requires 5+ years of data engineering experience and strong SQL, Python, Spark, ETL, and Airflow expertise.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
800
-
🗓️ - Date
November 5, 2025
🕒 - Duration
More than 6 months
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
San Francisco, CA
-
🧠 - Skills detailed
#Data Quality #Code Reviews #Airflow #Scala #Spark (Apache Spark) #Data Pipeline #ETL (Extract, Transform, Load) #Trino #Data Engineering #Python #SQL (Structured Query Language) #Documentation #Data Science #Data Processing
Role description
Job Title: Data Engineer
Duration: 6 months with possible extension
Location: San Francisco, CA (Hybrid)
Pay Rate: $80/hr - $100/hr
Job Description:
You will be part of the Client's team, focusing on developing ETL pipelines that support decision-making processes for demand, finance, and competitive data. Your contributions will help data scientists, analysts, and business leaders make informed decisions that drive the Client's success. You will need to evaluate multiple approaches and implement solutions based on fundamental principles, best practices, and supporting data. By architecting, building, and launching robust data pipelines, you will enable seamless access to insights that fuel critical functions such as Analytics, Data Science, and Engineering.
Responsibilities:
• Build core business data pipelines
• Design data models and schemas to meet business and engineering requirements
• Define and implement data quality checks to ensure ongoing data consistency (a minimal sketch of such a pipeline follows the role description)
• Perform SQL tuning to optimize data processing performance
• Write clean, well-tested, and maintainable code, prioritizing scalability and cost efficiency
• Conduct code reviews to uphold code quality standards
• Produce high-quality documentation to facilitate ownership transfer and ongoing support
• Collaborate with internal and external partners to remove blockers, provide support, and achieve results
Experience:
• 5+ years of professional experience in data engineering or a related field
• Strong expertise in SQL and experience with Spark and/or Trino
• Proficiency in Python
• Strong data modeling skills and a deep understanding of ETL processes
• Experience building and optimizing complex data models and pipelines
• Hands-on experience with Airflow
Nice-to-have experience:
• Working directly with cross-functional teams (data analytics, data science, engineering) to align data engineering solutions with business goals
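
As an illustration of the responsibilities above, here is a minimal sketch, assuming Airflow 2.x, of a daily pipeline with a load task followed by a data quality check. The DAG name, task names, and check logic are hypothetical placeholders, not part of the posting.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_and_load(**context):
    # Placeholder ETL step: a real pipeline would pull from a source system
    # and write to the warehouse (e.g. via Spark or Trino).
    print("extracting and loading demand data for", context["ds"])


def check_row_count(**context):
    # Placeholder data quality check: fail the task if the day's partition
    # is empty. A real check would run a count query against the warehouse.
    row_count = 1  # replace with an actual count query
    if row_count == 0:
        raise ValueError(f"No rows loaded for {context['ds']}")


with DAG(
    dag_id="demand_daily_etl",      # hypothetical DAG name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",              # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    load = PythonOperator(task_id="extract_and_load", python_callable=extract_and_load)
    quality_check = PythonOperator(task_id="check_row_count", python_callable=check_row_count)

    load >> quality_check           # the quality check runs only after the load succeeds
```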