Openkyber


⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Google Cloud Platform Data Engineer in Austin, TX (Hybrid) on a 12+ month W2 contract. Requires 8+ years in Data Engineering; expertise in GCP, BigQuery, Dataflow, and Kafka; and strong programming skills.
🌎 - Country
United States
💱 - Currency
Unknown
-
💰 - Day rate
Unknown
-
🗓️ - Date
March 17, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
Hybrid (Austin, TX)
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Austin, TX
-
🧠 - Skills detailed
#Scala #Data Modeling #SQL (Structured Query Language) #Data Architecture #Data Analysis #Airflow #Cloud #Data Science #ETL (Extract, Transform, Load) #Storage #Apache Spark #Batch #Data Warehouse #Data Lake #GCP (Google Cloud Platform) #Java #Security #Kubernetes #BigQuery #Data Quality #Data Pipeline #Dataflow #Kafka (Apache Kafka) #Data Processing #Spark (Apache Spark) #Big Data #Python #Docker #Data Engineering #Programming
Role description
Job Title: Google Cloud Platform Data Engineer
Location: Austin, TX (Hybrid)
Domain: Retail / E-Commerce
Duration: 12+ Months Contract
Only W2, No C2C

We are looking for an experienced Google Cloud Platform (GCP) Data Engineer to design and build scalable data pipelines and modern data platforms on GCP. The candidate will support enterprise analytics and data-driven initiatives.

Key Responsibilities:
- Design and develop data pipelines and ETL workflows using GCP services.
- Build scalable data processing systems using Dataflow and Dataproc.
- Develop and manage BigQuery data warehouses.
- Implement batch and streaming data pipelines.
- Work with data analysts and data scientists to enable advanced analytics and reporting.
- Ensure data quality, security, and governance standards.
- Optimize data pipeline performance and cost efficiency.

Required Skills:
- 8+ years of experience in Data Engineering / Big Data
- Strong experience with Google Cloud Platform (GCP)
- Hands-on experience with BigQuery, Dataflow, Dataproc, and Cloud Storage
- Experience with Apache Spark and Kafka
- Strong programming skills in Python, Java, or Scala
- Experience building ETL / ELT pipelines
- Strong knowledge of SQL and data modeling

Preferred Skills:
- Experience with Cloud Composer (Airflow)
- Experience with Docker / Kubernetes
- Experience with data lakes and modern data architecture

For applications and inquiries, contact: hirings@openkyber.com