Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This is a remote, US-based Data Engineer role; the contract length and pay rate are not specified. Key skills include Snowflake, SQL, Python, and ETL pipeline experience, with 5+ years in data engineering required.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
-
🗓️ - Date discovered
September 28, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
Remote
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
United States
🧠 - Skills detailed
#Data Architecture #Python #Airflow #ETL (Extract, Transform, Load) #Spark (Apache Spark) #ML (Machine Learning) #Data Pipeline #Kafka (Apache Kafka) #Data Engineering #SQL (Structured Query Language) #Snowflake #Observability #Documentation #Cloud #Data Quality #BigQuery #Redshift #dbt (data build tool) #Scala
Role description
Role: Data Engineer
Location: US (Remote)
Tech Stack: Snowflake, SQL, Python, ETL Pipeline

What You'll Do
• Build and maintain scalable, secure data pipelines for analytics, insights, and ML.
• Influence data architecture with a focus on performance and reliability.
• Collaborate with product, engineering, and business teams to deliver impactful solutions.
• Drive data quality through validation, lineage, and observability best practices.
• Share knowledge via reviews, documentation, and mentorship.

What We're Looking For
• 5+ years in data engineering, ideally in fast-paced, product-focused settings.
• Expertise with cloud data platforms (Snowflake, BigQuery, Redshift) and orchestration tools (Airflow, dbt).
• Strong skills in Python or Scala, plus efficient SQL.
• Experience with streaming frameworks (Kafka, Spark Streaming).
• Proven ability to design data models for analytics and ML.
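To make the pipeline and data-quality responsibilities above concrete, here is a minimal Python sketch of an extract-validate-load step. Everything in it (the sample rows, the assumed schema, and the stand-in loader) is an illustrative assumption, not from the listing; a production pipeline would read from a real source system and write to a warehouse such as Snowflake.

```python
# Illustrative extract -> validate -> load step. All names, the sample rows,
# and the assumed schema are hypothetical; they are not from the listing.
from typing import Iterable, Iterator

REQUIRED_FIELDS = ("user_id", "event_ts", "amount")  # assumed schema

def extract() -> Iterator[dict]:
    """Stand-in for pulling raw rows from a source system."""
    yield {"user_id": "u1", "event_ts": "2025-09-28T12:00:00Z", "amount": "9.99"}
    yield {"user_id": "u2", "event_ts": "", "amount": "4.50"}  # fails validation

def validate(rows: Iterable[dict]) -> Iterator[dict]:
    """Basic data-quality gate: keep only rows with all required fields set."""
    for row in rows:
        if all(row.get(field) for field in REQUIRED_FIELDS):
            yield row

def load(rows: Iterable[dict]) -> int:
    """Stand-in for a warehouse write (e.g., batched INSERTs into Snowflake)."""
    return sum(1 for _ in rows)

if __name__ == "__main__":
    print(f"loaded {load(validate(extract()))} valid row(s)")
```

Chaining generators keeps each stage streaming and independently testable; in practice an orchestrator like Airflow would schedule steps of this shape, with dbt handling in-warehouse transforms.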