

Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer based remotely in the US; the contract length and pay rate are unspecified. Key skills include Snowflake, SQL, Python, and ETL pipeline experience, with 5+ years in data engineering required.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: September 28, 2025
Project duration: Unknown
Location type: Remote
Contract type: Unknown
Security clearance: Unknown
Location detailed: United States
Skills detailed:
#Data Architecture #Python #Airflow #ETL (Extract, Transform, Load) #Spark (Apache Spark) #ML (Machine Learning) #Data Pipeline #Kafka (Apache Kafka) #Data Engineering #SQL (Structured Query Language) #Snowflake #Observability #Documentation #Cloud #Data Quality #BigQuery #Redshift #dbt (data build tool) #Scala
Role description
About the job
Role: Data Engineer
Location: US (Remote)
Tech Stack: Snowflake, SQL, Python, ETL Pipeline
What You'll Do
• Build and maintain scalable, secure data pipelines for analytics, insights, and ML.
• Influence data architecture with a focus on performance and reliability.
• Collaborate with product, engineering, and business teams to deliver impactful solutions.
• Drive data quality through validation, lineage, and observability best practices.
• Share knowledge via reviews, documentation, and mentorship.
What We're Looking For
• 5+ years in data engineering, ideally in fast-paced, product-focused settings.
• Expertise with cloud data platforms (Snowflake, BigQuery, Redshift) and orchestration tools (Airflow, dbt).
• Strong skills in Python or Scala, plus efficient SQL.
• Experience with streaming frameworks (Kafka, Spark Streaming).
• Proven ability to design data models for analytics and ML.
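The pipeline and data-quality work described above can be sketched in miniature. This is a minimal, self-contained extract-transform-load example in Python, with validation that routes bad rows aside rather than loading them; sqlite3 stands in for a warehouse like Snowflake, and all table and field names here are illustrative, not from the posting.

```python
import sqlite3

def extract():
    # Stand-in for pulling rows from a source system (API, file, database).
    return [
        {"id": 1, "amount": "10.50", "region": "us-east"},
        {"id": 2, "amount": "bad", "region": "us-west"},  # fails validation
        {"id": 3, "amount": "7.25", "region": None},      # fails validation
    ]

def transform(rows):
    # Validate and normalize; collect rejected rows for inspection
    # instead of silently dropping them.
    clean, rejected = [], []
    for row in rows:
        try:
            amount = float(row["amount"])
            if row["region"] is None:
                raise ValueError("missing region")
            clean.append((row["id"], amount, row["region"]))
        except (ValueError, TypeError):
            rejected.append(row)
    return clean, rejected

def load(rows, conn):
    # Load validated rows into the target table.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS sales (id INTEGER, amount REAL, region TEXT)"
    )
    conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", rows)
    conn.commit()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    clean, rejected = transform(extract())
    load(clean, conn)
    loaded = conn.execute("SELECT COUNT(*) FROM sales").fetchone()[0]
    print(f"loaded={loaded} rejected={len(rejected)}")  # loaded=1 rejected=2
```

In production this pattern would typically run as an orchestrated task (e.g. an Airflow DAG) with the rejected rows written to a quarantine table for lineage and observability.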