

Tech Genius Inc.
Big Data Developer With GCP on Our W2
Featured Role | Apply direct with Data Freelance Hub
This role is for a Big Data Developer with GCP, requiring 8+ years of experience in Big Data technologies, Spark, Python, and GCP. The contract is based in Phoenix, AZ or New York, NY, with a pay rate of "unknown."
Country
United States
Currency
$ USD
-
Day rate
Unknown
-
Date
December 25, 2025
Duration
Unknown
-
Location
Unknown
-
Contract
W2 Contractor
-
Security
Unknown
-
Location detailed
Phoenix, AZ
-
Skills detailed
#Computer Science #Data Ingestion #Spark (Apache Spark) #SQL (Structured Query Language) #Automation #Jenkins #Data Engineering #Scrum #Dataflow #Datasets #PySpark #Apache Spark #GCP (Google Cloud Platform) #BigQuery #Programming #Cloud #Data Quality #Big Data #Data Pipeline #ETL (Extract, Transform, Load) #Scala #Data Processing #Agile #Storage #Python #Security #GIT #Hadoop
Role description
Job Title: Big Data Developer With GCP
Client: American Express (Amex)
Location: Phoenix, AZ & New York, NY (local or nearby)
Visa: OPT, H4 EAD, L2
Experience: 8+ years
Role Overview
We are seeking an experienced Big Data Developer to join our client, Amex. The ideal candidate will have strong expertise in Big Data technologies, Spark, Python, and GCP (Google Cloud Platform), and will use it to design, develop, and optimize large-scale data processing pipelines and enterprise applications.
Key Responsibilities
Design and develop scalable data pipelines and ETL processes using Spark and Python (a minimal sketch of such a pipeline follows this list).
Work within Big Data ecosystems to handle large datasets across distributed systems.
Deploy, manage, and optimize applications on Google Cloud Platform (GCP) services like BigQuery, Dataproc, Dataflow, and Pub/Sub.
Collaborate with business stakeholders and data engineers to deliver high-quality data solutions.
Ensure data quality, reliability, and security across applications.
Participate in Agile/Scrum ceremonies and contribute to sprint deliverables.
Troubleshoot and resolve performance issues in data pipelines and distributed systems.
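To make the pipeline work above concrete, here is a minimal, illustrative PySpark sketch of a GCS-to-BigQuery flow of the kind this role describes. It is an assumption of what such a pipeline might look like, not the client's actual code: the bucket, dataset, and column names are hypothetical, and it assumes the spark-bigquery connector is available on the cluster (it ships with Dataproc).

```python
# Hypothetical sketch: read raw CSVs from Cloud Storage, clean them with
# PySpark, and append the result to a BigQuery table.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("gcs-to-bigquery-etl").getOrCreate()

# Extract: raw files landed in a GCS bucket (hypothetical path).
raw = (spark.read
            .option("header", True)
            .csv("gs://example-raw-bucket/transactions/*.csv"))

# Transform: type the amount column, drop malformed rows, stamp a load date.
clean = (raw.withColumn("amount", F.col("amount").cast("double"))
            .filter(F.col("amount").isNotNull())
            .withColumn("load_date", F.current_date()))

# Load: write to BigQuery via the spark-bigquery connector, staging through
# a temporary GCS bucket (hypothetical names).
(clean.write.format("bigquery")
      .option("table", "example_dataset.transactions_clean")
      .option("temporaryGcsBucket", "example-staging-bucket")
      .mode("append")
      .save())
```

On Dataproc, a job like this would typically be submitted with `gcloud dataproc jobs submit pyspark`, with CI/CD (e.g., Jenkins) layered on top as the responsibilities above suggest.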
Required Skills
Strong hands-on experience in Big Data technologies (Hadoop/Spark ecosystem).
Proficiency in Apache Spark (PySpark, DataFrames, RDDs); the sketch after this list contrasts the DataFrame and Spark SQL styles.
Strong Python programming skills for data engineering and automation.
Hands-on experience with GCP services (BigQuery, Dataflow, Dataproc, Pub/Sub, Cloud Storage).
Solid understanding of ETL frameworks, data ingestion, and data transformation.
Knowledge of SQL and database optimization techniques.
Experience with CI/CD pipelines (e.g., Git, Jenkins) is a plus.
Good problem-solving and communication skills.
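Since the list above calls out both the DataFrame API and SQL, here is a small, self-contained sketch showing the same aggregation in both styles. The dataset and all names are invented purely for illustration.

```python
# Illustrative only: one aggregation expressed with the DataFrame API and
# again with Spark SQL; both produce the same result.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dataframe-vs-sql").getOrCreate()

# Tiny in-memory dataset (hypothetical payments data).
df = spark.createDataFrame(
    [("card", 120.0), ("card", 80.0), ("wire", 300.0)],
    ["channel", "amount"],
)

# DataFrame API: group, aggregate, sort.
by_channel = (df.groupBy("channel")
                .agg(F.sum("amount").alias("total"))
                .orderBy("total"))

# Equivalent Spark SQL over a temporary view.
df.createOrReplaceTempView("payments")
by_channel_sql = spark.sql(
    "SELECT channel, SUM(amount) AS total "
    "FROM payments GROUP BY channel ORDER BY total"
)

by_channel.show()      # card 200.0, then wire 300.0
by_channel_sql.show()  # identical output
```

Both forms compile to the same execution plan under Spark's Catalyst optimizer, so the choice between them is largely stylistic.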
Qualifications
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
7+ years of professional experience in Big Data development and data engineering.
Prior experience in financial domain projects is a plus.






