

GCP Big Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a GCP Big Data Engineer on a freelance contract, offering competitive pay. Key skills include GCP services, Apache Beam, Spark, and Kafka. Experience in data engineering and strong Java/Python coding skills are required.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: August 13, 2025
Project duration: Unknown
Location type: Unknown
Contract type: Unknown
Security clearance: Unknown
Location detailed: Philadelphia, PA
Skills detailed
#Monitoring #Storage #Data Pipeline #BigQuery #Cloud #Data Processing #Clustering #ML (Machine Learning) #Data Integration #"ETL (Extract, Transform, Load)" #Google Analytics #Java #Data Science #Apache Beam #Python #Big Data #PySpark #Dataflow #Data Engineering #Spark (Apache Spark) #Airflow #Scala #Kafka (Apache Kafka) #Batch #GCP (Google Cloud Platform)
Role description
• Design, develop, and optimize scalable Big Data pipelines using GCP services (Dataflow, BigQuery, Pub/Sub, Cloud Storage).
• Implement real-time and batch data processing using Apache Beam, Spark, and PySpark (a minimal Beam sketch follows this list).
• Work with Kafka for event streaming and data integration (see the consumer sketch below).
• Orchestrate workflows using Airflow for scheduling and monitoring data pipelines (see the DAG sketch below).
• Write efficient Java/Python code for data processing and transformation.
• Optimize BigQuery performance, including partitioning, clustering, and query tuning (see the DDL sketch below).
• Collaborate with data scientists and analysts to enable advanced analytics and machine learning pipelines.
• Ensure data reliability, quality, and governance across pipelines.
• Leverage Google Analytics and GFO (Google for Organizations) where applicable.
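
To make the pipeline bullets concrete, here is a minimal sketch of a streaming pipeline in the Apache Beam Python SDK, reading from Pub/Sub and writing to BigQuery on the Dataflow runner. The project, topic, and table names are placeholders, not details from this posting.

```python
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def parse_event(raw: bytes) -> dict:
    """Decode a Pub/Sub payload into a BigQuery-ready row."""
    return json.loads(raw.decode("utf-8"))


def run():
    options = PipelineOptions(
        streaming=True,           # real-time mode; omit for batch runs
        runner="DataflowRunner",  # executes on GCP Dataflow
    )
    with beam.Pipeline(options=options) as p:
        (
            p
            | "ReadEvents" >> beam.io.ReadFromPubSub(
                topic="projects/my-project/topics/events")  # placeholder topic
            | "Parse" >> beam.Map(parse_event)
            | "WriteToBQ" >> beam.io.WriteToBigQuery(
                "my-project:analytics.events",  # placeholder table
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
                create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER,
            )
        )


if __name__ == "__main__":
    run()
```

Dropping streaming=True and swapping ReadFromPubSub for a bounded source such as beam.io.ReadFromText turns the same pipeline shape into a batch job.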
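The Kafka bullet typically means wiring existing brokers into GCP ingestion. Below is a small consumer sketch using the kafka-python client, one of several client libraries; the broker address, topic, and group id are invented for illustration.

```python
import json

from kafka import KafkaConsumer  # kafka-python client library

# Broker, topic, and group id below are illustrative placeholders.
consumer = KafkaConsumer(
    "orders",
    bootstrap_servers=["broker-1:9092"],
    group_id="gcp-ingest",
    auto_offset_reset="earliest",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

for message in consumer:
    # Hand each event to the downstream pipeline, e.g. republish to
    # Pub/Sub or stage to Cloud Storage for a batch load.
    print(message.topic, message.partition, message.offset, message.value)
```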
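For the orchestration bullet, here is a minimal Airflow 2.x DAG sketch that runs a Beam job and a follow-up BigQuery step on a daily schedule. The DAG id, file paths, and commands are hypothetical.

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator

default_args = {
    "retries": 2,
    "retry_delay": timedelta(minutes=5),
}

# Hypothetical daily pipeline: run the Beam job, then a BigQuery rollup.
with DAG(
    dag_id="daily_events_pipeline",
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    default_args=default_args,
) as dag:
    run_beam_job = BashOperator(
        task_id="run_beam_job",
        bash_command="python /opt/pipelines/events_pipeline.py",  # placeholder path
    )
    rollup = BashOperator(
        task_id="bigquery_rollup",
        bash_command="bq query --use_legacy_sql=false < /opt/sql/daily_rollup.sql",  # placeholder
    )
    run_beam_job >> rollup
```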
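The partitioning and clustering bullet usually comes down to table DDL like the following, issued here through the google-cloud-bigquery Python client. The dataset, table, and column names are invented for illustration.

```python
from google.cloud import bigquery

client = bigquery.Client()  # uses application-default credentials

# Partition on the event date and cluster on common filter columns so
# queries that filter on event_date / user_id scan fewer bytes.
ddl = """
CREATE TABLE IF NOT EXISTS analytics.events_partitioned (
  event_date DATE,
  user_id    STRING,
  event_type STRING,
  payload    JSON
)
PARTITION BY event_date
CLUSTER BY user_id, event_type
"""

client.query(ddl).result()  # blocks until the DDL job finishes
```

Query tuning then follows the same principle: filter on event_date first so BigQuery prunes partitions before clustering narrows the scan.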