

GCP Big Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a GCP Big Data Engineer in Philadelphia, PA, on a contract of unspecified length. The pay rate is $62/hour on W2 or $70/hour on C2C. Candidates must have 8+ years of experience in Big Data engineering, expertise in GCP, and proficiency in PySpark and Apache Beam.
Country: United States
Currency: $ USD
Day rate: 560
Date discovered: August 2, 2025
Project duration: Unknown
Location type: On-site
Contract type: W2 Contractor
Security clearance: Unknown
Location detailed: Pennsylvania, United States
Skills detailed: #Distributed Computing #SQL (Structured Query Language) #PySpark #Dataflow #Java #BigQuery #Storage #GCP (Google Cloud Platform) #Data Pipeline #Big Data #Airflow #Data Processing #Kafka (Apache Kafka) #Python #Scala #ETL (Extract, Transform, Load) #Google Analytics #Clustering #Monitoring #Cloud #Data Engineering #Data Science #Datasets #ML (Machine Learning) #Apache Beam #Spark (Apache Spark) #Data Integration #Batch
Role description
Position: GCP Big Data Engineer
Location: Philadelphia, PA (Onsite)
Experience: 8+ years
Client: UMG
Rate: $62 on W2 and $70 on C2C
Net 60 days payment terms
Job Description:
We are seeking a highly skilled Senior GCP Big Data Engineer to join our team and work on cutting-edge data solutions for our client, UMG. The ideal candidate will have extensive experience in designing, building, and optimizing large-scale data pipelines on Google Cloud Platform (GCP) using PySpark, Apache Beam, Dataflow, and BigQuery. You will play a key role in processing and analyzing massive datasets to drive business insights.
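As an illustration of the stack named above, the following is a minimal Apache Beam (Python SDK) sketch of such a pipeline: it reads events from Pub/Sub, parses them, and streams rows into BigQuery, and can be submitted to Dataflow. The project, topic, bucket, table, and schema below are placeholders for illustration, not details from this posting.

```python
# Minimal Apache Beam pipeline sketch (Python SDK): Pub/Sub -> parse -> BigQuery.
# All resource names below are placeholders, not values from this posting.
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def parse_event(message: bytes) -> dict:
    """Decode a Pub/Sub message payload into a BigQuery-ready row (assumed JSON)."""
    return json.loads(message.decode("utf-8"))


def run():
    options = PipelineOptions(
        streaming=True,
        project="example-project",           # placeholder project
        runner="DataflowRunner",              # use DirectRunner for local testing
        region="us-east1",
        temp_location="gs://example-bucket/tmp",
    )
    with beam.Pipeline(options=options) as p:
        (
            p
            | "ReadFromPubSub" >> beam.io.ReadFromPubSub(
                topic="projects/example-project/topics/events")
            | "ParseJSON" >> beam.Map(parse_event)
            | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
                "example-project:analytics.events",
                schema="event_id:STRING,user_id:STRING,ts:TIMESTAMP",
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            )
        )


if __name__ == "__main__":
    run()
```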
Key Responsibilities:
• Design, develop, and optimize scalable Big Data pipelines using GCP services (Dataflow, BigQuery, Pub/Sub, Cloud Storage).
• Implement real-time and batch data processing using Apache Beam, Spark, and PySpark.
• Work with Kafka for event streaming and data integration.
• Orchestrate workflows using Airflow for scheduling and monitoring data pipelines (a DAG sketch follows this list).
• Write efficient Java/Python code for data processing and transformation.
• Optimize BigQuery performance, including partitioning, clustering, and query tuning.
• Collaborate with data scientists and analysts to enable advanced analytics and machine learning pipelines.
• Ensure data reliability, quality, and governance across pipelines.
• Leverage Google Analytics and GFO (Google for Organizations) where applicable.
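As referenced in the Airflow bullet above, here is a minimal DAG sketch of how such a pipeline might be scheduled and monitored: it launches a Dataflow Flex Template job daily and follows it with a simple BigQuery row-count check. The project ID, region, template path, and table names are illustrative assumptions only.

```python
# Minimal Airflow DAG sketch: run a Dataflow Flex Template daily, then verify
# that rows landed in BigQuery. Resource names are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.dataflow import DataflowStartFlexTemplateOperator
from airflow.providers.google.cloud.operators.bigquery import BigQueryCheckOperator

with DAG(
    dag_id="daily_events_pipeline",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    run_dataflow = DataflowStartFlexTemplateOperator(
        task_id="run_dataflow_job",
        project_id="example-project",        # placeholder project
        location="us-east1",
        body={
            "launchParameter": {
                "jobName": "events-batch-{{ ds_nodash }}",
                "containerSpecGcsPath": "gs://example-bucket/templates/events.json",
                "parameters": {"run_date": "{{ ds }}"},
            }
        },
    )

    check_rows = BigQueryCheckOperator(
        task_id="check_rows_loaded",
        sql=(
            "SELECT COUNT(*) > 0 "
            "FROM `example-project.analytics.events` "
            "WHERE DATE(ts) = '{{ ds }}'"
        ),
        use_legacy_sql=False,
    )

    # Only run the quality check once the Dataflow job has finished.
    run_dataflow >> check_rows
```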
Required Skills & Experience:
• 8+ years of hands-on experience in Big Data engineering.
• Strong expertise in GCP (Google Cloud Platform): Dataflow, BigQuery, Pub/Sub, Cloud Storage.
• Proficiency in PySpark, Spark, and Java/Scala for Big Data processing.
• Experience with Apache Beam for unified batch/streaming pipelines.
• Solid understanding of Kafka for real-time data streaming.
• Hands-on experience with Airflow for workflow orchestration.
• Strong SQL skills and optimization techniques in BigQuery (a partitioning/clustering sketch follows this list).
• Experience with distributed computing and performance tuning.
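For the BigQuery optimization skills listed above, the sketch below shows one common pattern: creating a day-partitioned table clustered on a key column with the google-cloud-bigquery client, then querying it with a filter that prunes partitions. The dataset, table, and column names are assumptions for illustration, not part of this posting.

```python
# Sketch of BigQuery partitioning + clustering via the google-cloud-bigquery
# client. Dataset, table, and column names are placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")  # placeholder project

table = bigquery.Table(
    "example-project.analytics.events_clustered",
    schema=[
        bigquery.SchemaField("event_id", "STRING"),
        bigquery.SchemaField("user_id", "STRING"),
        bigquery.SchemaField("ts", "TIMESTAMP"),
    ],
)
# Partition by day on the timestamp column and cluster on user_id so queries
# that filter on ts scan fewer bytes and rows per user stay co-located.
table.time_partitioning = bigquery.TimePartitioning(
    type_=bigquery.TimePartitioningType.DAY, field="ts"
)
table.clustering_fields = ["user_id"]
client.create_table(table, exists_ok=True)

# Filtering on the partition column limits the partitions (and cost) scanned.
query = """
    SELECT user_id, COUNT(*) AS events
    FROM `example-project.analytics.events_clustered`
    WHERE ts >= TIMESTAMP('2025-08-01')
    GROUP BY user_id
"""
for row in client.query(query).result():
    print(row.user_id, row.events)
```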
Good to Have:
• Knowledge of GFO (Google for Organizations).
• Familiarity with Google Analytics for data integration.
• Experience with Dataproc, Cloud Composer, or Dataprep.
• Understanding of CI/CD pipelines for Big Data applications.