

Odiin.
Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer with an unspecified contract length and pay rate, based in San Francisco, CA. Key skills include Python, SQL, ETL processes, and experience with cloud platforms such as AWS or GCP.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date
October 9, 2025
🕒 - Duration
Unknown
🏝️ - Location
Unknown
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
San Francisco, CA
🧠 - Skills detailed
#Scala #Schema Design #Documentation #Data Engineering #Big Data #Spark (Apache Spark) #Azure #NoSQL #Database Systems #Snowflake #Luigi #Data Analysis #PostgreSQL #GCP (Google Cloud Platform) #Airflow #Data Science #Python #MySQL #Cloud #Databricks #Data Modeling #Kafka (Apache Kafka) #Security #Programming #SQL (Structured Query Language) #Data Pipeline #ETL (Extract, Transform, Load) #AWS (Amazon Web Services) #dbt (data build tool) #ML (Machine Learning) #Hadoop #Data Quality
Role description
You’ll work closely with engineering, analytics, and product teams to ensure data is accurate, accessible, and efficiently processed across the organization.
Key Responsibilities:
• Design, develop, and maintain scalable data pipelines and architectures.
• Collect, process, and transform data from multiple sources into structured, usable formats.
• Ensure data quality, reliability, and security across all systems.
• Work with data analysts and data scientists to optimize data models for analytics and machine learning.
• Implement ETL (Extract, Transform, Load) processes and automate workflows (a short pipeline sketch follows this list).
• Monitor and troubleshoot data infrastructure, ensuring minimal downtime and high performance.
• Collaborate with cross-functional teams to define data requirements and integrate new data sources.
• Maintain comprehensive documentation for data systems and processes.
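To give a concrete flavour of the ETL and workflow-automation work described above, here is a minimal sketch of a daily pipeline written as an Airflow DAG (assuming Airflow 2.4 or later). The task names, the inline source records, and the print-based load step are illustrative assumptions only, not a description of the actual stack.

from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # Pull raw records from a source system (hypothetical inline data).
    return [{"id": 1, "amount": "19.99"}, {"id": 2, "amount": "5.00"}]

def transform(ti):
    # Cast amounts to floats so downstream consumers receive typed data.
    rows = ti.xcom_pull(task_ids="extract")
    return [{"id": r["id"], "amount": float(r["amount"])} for r in rows]

def load(ti):
    # Placeholder for a warehouse write (e.g. Snowflake or PostgreSQL).
    rows = ti.xcom_pull(task_ids="transform")
    print(f"Loading {len(rows)} rows")

with DAG(dag_id="example_daily_etl", start_date=datetime(2025, 1, 1),
         schedule="@daily", catchup=False) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> transform_task >> load_task

In practice the extract and load steps would talk to real source systems and a warehouse, but the task structure and dependency chaining are the core of the responsibility described above.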
Requirements:
• Proven experience as a Data Engineer, ETL Developer, or similar role.
• Strong programming skills in Python, SQL, or Scala.
• Experience with data pipeline tools (Airflow, dbt, Luigi, etc.).
• Familiarity with big data technologies (Spark, Hadoop, Kafka, etc.).
• Hands-on experience with cloud data platforms (AWS, GCP, Azure, Snowflake, or Databricks).
• Understanding of data modeling, warehousing, and schema design.
• Solid knowledge of database systems (PostgreSQL, MySQL, NoSQL); a small data-quality example follows this list.
• Strong analytical and problem-solving skills.
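As an illustration of the data-quality and database knowledge listed above, the sketch below runs two basic checks against a PostgreSQL table. The orders table, its columns, and the connection string are assumptions made for the example only.

import psycopg2  # assumes a reachable PostgreSQL instance and the psycopg2 package

# Each query returns the number of offending rows; zero means the check passes.
CHECKS = {
    "no_null_ids": "SELECT COUNT(*) FROM orders WHERE id IS NULL",
    "no_duplicate_ids": "SELECT COUNT(*) - COUNT(DISTINCT id) FROM orders",
}

def run_checks(dsn: str) -> None:
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        for name, sql in CHECKS.items():
            cur.execute(sql)
            bad_rows = cur.fetchone()[0]
            status = "PASS" if bad_rows == 0 else f"FAIL ({bad_rows} offending rows)"
            print(f"{name}: {status}")

if __name__ == "__main__":
    run_checks("postgresql://user:password@localhost:5432/analytics")

The same pattern extends naturally to tools such as dbt tests or Airflow sensor tasks, which is the kind of automation the responsibilities above call for.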