

Odiin.
Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer; the contract length and pay rate are unspecified. Key skills include Python, SQL, ETL processes, and experience with big data technologies and cloud platforms.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
November 27, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
San Francisco, CA
-
🧠 - Skills detailed
#Data Quality #GCP (Google Cloud Platform) #NoSQL #Programming #Schema Design #Airflow #Scala #dbt (data build tool) #Big Data #Kafka (Apache Kafka) #PostgreSQL #ETL (Extract, Transform, Load) #Data Modeling #Database Systems #Azure #Databricks #Data Engineering #Data Pipeline #Spark (Apache Spark) #Luigi #Snowflake #Data Science #AWS (Amazon Web Services) #Cloud #Documentation #Data Analysis #Python #SQL (Structured Query Language) #MySQL #Hadoop #ML (Machine Learning) #Security
Role description
You’ll work closely with engineering, analytics, and product teams to ensure data is accurate, accessible, and efficiently processed across the organization.
Key Responsibilities:
• Design, develop, and maintain scalable data pipelines and architectures.
• Collect, process, and transform data from multiple sources into structured, usable formats.
• Ensure data quality, reliability, and security across all systems.
• Work with data analysts and data scientists to optimize data models for analytics and machine learning.
• Implement ETL (Extract, Transform, Load) processes and automate workflows (see the sketch after this list).
• Monitor and troubleshoot data infrastructure, ensuring minimal downtime and high performance.
• Collaborate with cross-functional teams to define data requirements and integrate new data sources.
• Maintain comprehensive documentation for data systems and processes.
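To make the ETL responsibility above concrete, here is a minimal sketch in Python. The source file (events.csv), column names, and the SQLite target are hypothetical stand-ins; a pipeline at the scale this role describes would more likely load into PostgreSQL or Snowflake and add logging, retries, and schema validation.

```python
# Minimal ETL sketch: extract from a CSV source, apply a basic
# data-quality transform, load into a structured table.
# File name, columns, and target table are hypothetical.
import csv
import sqlite3

def extract(path: str) -> list[dict]:
    """Read raw rows from a CSV source (hypothetical 'events.csv')."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def transform(rows: list[dict]) -> list[tuple]:
    """Normalize types and drop rows that fail a basic quality check."""
    cleaned = []
    for row in rows:
        if not row.get("user_id"):  # data-quality gate: skip incomplete rows
            continue
        cleaned.append((int(row["user_id"]), row["event"].strip().lower()))
    return cleaned

def load(rows: list[tuple], db_path: str = "warehouse.db") -> None:
    """Write transformed rows into the target table, creating it if needed."""
    with sqlite3.connect(db_path) as conn:  # commits on success
        conn.execute(
            "CREATE TABLE IF NOT EXISTS events (user_id INTEGER, event TEXT)"
        )
        conn.executemany("INSERT INTO events VALUES (?, ?)", rows)

if __name__ == "__main__":
    load(transform(extract("events.csv")))
```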
Requirements:
• Proven experience as a Data Engineer, ETL Developer, or similar role.
• Strong programming skills in Python, SQL, or Scala.
• Experience with data pipeline tools (Airflow, dbt, Luigi, etc.); a minimal orchestration sketch follows this list.
• Familiarity with big data technologies (Spark, Hadoop, Kafka, etc.).
• Hands-on experience with cloud data platforms (AWS, GCP, Azure, Snowflake, or Databricks).
• Understanding of data modeling, warehousing, and schema design.
• Solid knowledge of database systems (PostgreSQL, MySQL, NoSQL).
• Strong analytical and problem-solving skills.
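As a rough illustration of the pipeline tooling listed above, the sketch below shows how extract/transform/load steps might be orchestrated as an Apache Airflow DAG. The DAG id, schedule, and placeholder callables are hypothetical, and it assumes Airflow 2.4+ (where the schedule argument replaced schedule_interval).

```python
# Minimal Airflow DAG sketch: three tasks chained in ETL order.
# DAG id, schedule, and callables are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():  # placeholder; a real task would call pipeline code
    ...

def transform():
    ...

def load():
    ...

with DAG(
    dag_id="daily_events_pipeline",  # hypothetical pipeline name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",               # run once per day
    catchup=False,                   # do not backfill missed runs
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load  # enforce ETL dependency order
```

Chaining tasks with >> encodes the dependency order, so a failed transform blocks the load and surfaces in Airflow's UI, which is where the monitoring and troubleshooting responsibility above would typically start.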