

Curate Partners
Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Junior to Mid-Level Data Engineer; the contract length and pay rate are TBD, and the location is remote. Key skills include Python, SQL, GCP, and Spark, and 3-5 years of Data Engineering experience is essential.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
January 31, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
United States
-
🧠 - Skills detailed
#Cloud #ML (Machine Learning) #Data Quality #Data Engineering #Airflow #Scala #Libraries #Python #Data Modeling #GCP (Google Cloud Platform) #Scripting #Spark (Apache Spark) #Data Pipeline #Data Processing #Data Transformations #ETL (Extract, Transform, Load) #SQL (Structured Query Language) #Apache Airflow #Automation #AI (Artificial Intelligence)
Role description
Junior - Mid Level Data Engineer
Start: ASAP; Green Card or US Citizen
Skills:
- Python (and Python libraries)
- SQL
- GCP (be able to create DAGs)
- Spark
- Working with ML libraries is a plus
Overview
We are seeking a junior to mid-level Data Engineer to join our engineering team and help design and build scalable data pipelines and automation frameworks that support analytics, quality engineering, and platform reliability initiatives.
This is a hands-on engineering role. You will focus on building data workflows, orchestration pipelines, and automation systems using modern cloud and data engineering tools within GCP.
What You’ll Do
• Design, build, and maintain data pipelines and ETL/ELT workflows
• Develop and schedule Airflow DAGs for automated processing (a minimal sketch follows this list)
• Write scalable data transformations using Python, SQL, and Spark
• Work within GCP services to deploy and manage data workflows
• Build automation frameworks to support data validation and system reliability
• Partner with engineering teams to improve data quality and operational efficiency
• Contribute to POCs and internal tools, including AI/ML or data-driven initiatives when applicable
• Support occasional cross-team efforts when high-priority delivery requires collaboration
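For illustration only, a minimal Airflow DAG of the kind described above might look like the sketch below. The DAG id, task names, and sample data are hypothetical and not taken from this posting, and the sketch assumes a recent Airflow 2.x environment such as GCP's Cloud Composer.

# Hypothetical daily pipeline sketch; names and data are placeholders.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # Placeholder: pull raw records from a source system.
    return [{"order_id": 1, "amount": 125.0}]

def transform(ti):
    # Placeholder: reshape the extracted rows before loading.
    rows = ti.xcom_pull(task_ids="extract")
    return [{**row, "amount_usd": round(row["amount"], 2)} for row in rows]

with DAG(
    dag_id="orders_daily",            # hypothetical pipeline name
    start_date=datetime(2026, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task    # run extract, then transform, once per day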
Required Skills
• 3-5 years of hands-on experience in Data Engineering or Data Platforms
• Strong Python (data processing, scripting, automation)
• Strong SQL
• Experience building ETL/ELT pipelines
• Experience with Apache Airflow (DAG creation, scheduling, orchestration)
• Familiarity with Google Cloud Platform (GCP) services
• Basic experience with Spark or distributed data processing (see the sketch after this list)
• Understanding of data modeling and pipeline best practices
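As a rough illustration of the Spark requirement, a basic PySpark aggregation could look like the following. The SparkSession settings, column names, and sample rows are placeholders chosen for the example, not details from this role.

# Illustrative PySpark batch transformation with made-up event data.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events_rollup").getOrCreate()

events = spark.createDataFrame(
    [("2026-01-30", "click", 3), ("2026-01-30", "view", 10), ("2026-01-31", "click", 5)],
    ["event_date", "event_type", "count"],
)

# Aggregate event counts per day, a typical transformation step in an ELT job.
daily = (
    events.groupBy("event_date")
    .agg(F.sum("count").alias("total_events"))
    .orderBy("event_date")
)
daily.show()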






