

Openkyber
GCP Support Associate
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is a GCP Support Associate (Data Engineer) position on a 6-month contract with likely extension, fully remote with required PST/EST overlap. Key skills include PySpark, Databricks, Python, and Google Cloud Platform experience; 7+ years of data engineering experience preferred.
🌎 - Country
United States
💱 - Currency
Unknown
💰 - Day rate
Unknown
🗓️ - Date
April 28, 2026
🕒 - Duration
More than 6 months
🏝️ - Location
Remote
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
Georgia
🧠 - Skills detailed
#Cloud #dbt (data build tool) #Data Modeling #GCP (Google Cloud Platform) #Data Ingestion #Airflow #Datasets #Databricks #BigQuery #Spark (Apache Spark) #Python #PySpark #Storage #XML (eXtensible Markup Language) #Data Engineering #JSON (JavaScript Object Notation)
Role description
Data Engineer (Google Cloud Platform) (Contract) Multiple Openings
Type: Contract (6 months, likely extension, long-term potential)
Location: Remote (PST/EST overlap required)
Technical Requirements
Must-Have
- PySpark & Databricks: Strong hands-on experience building and maintaining production pipelines.
- Experience with Unity Catalog is a plus.
- Python Engineering: Primary development language with production-grade practices (typed, tested, modular code, not notebook-only development).
- Google Cloud Platform Ecosystem: Proven experience with BigQuery, Dataproc, Airflow/Cloud Composer, Pub/Sub, and Cloud Storage (Parquet).
- Data Ingestion: Experience working with complex, multi-source datasets across varying schemas and formats (e.g., JSON, CSV, XML, custom feeds); see the ingestion sketch after this section.
- Data Modeling: Strong understanding of staging and curated layer design, partitioning strategies, and schema evolution across distributed data sources.
Experience Level: 7+ years of hands-on data engineering experience with a track record of building and maintaining production systems (not research-focused profiles).
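As a rough illustration of this stack (not part of the posting itself), the sketch below shows a minimal PySpark job that ingests two hypothetical feeds in different formats, aligns them to a common staging schema, and writes a date-partitioned Parquet layer to Cloud Storage. All bucket paths, column names, and schemas are invented for illustration.

```python
# Minimal multi-source ingestion sketch with PySpark. All paths, columns,
# and schemas are hypothetical; a real pipeline would add explicit schemas,
# validation, and incremental loads.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("multi_source_ingest").getOrCreate()

# Two hypothetical feeds arriving in different formats.
json_feed = spark.read.json("gs://example-landing/events/*.json")
csv_feed = spark.read.option("header", True).csv("gs://example-landing/events/*.csv")

# Align each feed to a common staging schema before unioning.
def to_staging(df, source_name):
    return df.select(
        F.col("event_id").cast("string"),
        F.col("event_ts").cast("timestamp"),
    ).withColumn("source", F.lit(source_name))

staged = to_staging(json_feed, "json").unionByName(to_staging(csv_feed, "csv"))

# Curated layer: partitioned by event date so downstream queries can prune.
(
    staged.withColumn("event_date", F.to_date("event_ts"))
    .write.mode("overwrite")
    .partitionBy("event_date")
    .parquet("gs://example-curated/events/")
)
```

On Databricks or Dataproc, the same job would typically write to governed tables (e.g., via Unity Catalog, or BigQuery external tables over the Parquet files) rather than raw paths.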
Nice to Have
- dbt: Experience working with dbt, ideally within a medallion/lakehouse architecture.
- Entity Resolution: Familiarity with record linkage techniques (fuzzy matching, phonetic similarity, Python-based frameworks); a toy example appears after this list.
- Domain Experience: Exposure to music, royalties, or media rights data is a plus.
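To make the entity-resolution item concrete (again, not from the posting), here is a toy record-linkage sketch using only the Python standard library; the catalog entries, query, and 0.85 threshold are invented.

```python
# Toy record linkage via fuzzy string matching. Real pipelines add
# normalization, blocking, phonetic encodings (e.g., Soundex), and a
# dedicated framework on top of raw similarity scores like this one.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Similarity ratio in [0, 1] between two case-folded strings."""
    return SequenceMatcher(None, a.casefold().strip(), b.casefold().strip()).ratio()

catalog = ["The Beatles", "Beatles, The", "Leonard Cohen", "L. Cohen"]
incoming = "the beatles"

# Link the incoming record to its best catalog match, if confident enough.
best = max(catalog, key=lambda name: similarity(incoming, name))
if similarity(incoming, best) >= 0.85:
    print(f"linked {incoming!r} -> {best!r}")
else:
    print(f"no confident match for {incoming!r}")
```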
For applications and inquiries, contact: hirings@openkyber.com




