Tech Genius Inc

Big Data Developer With GCP

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Big Data Developer with GCP, requiring 6+ years of experience in Big Data technologies, Spark, Python, and GCP. Contract length and pay rate are unspecified. Work location is Phoenix, AZ or New York, NY.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
πŸ—“οΈ - Date
October 3, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
πŸ“ - Location detailed
Phoenix, AZ
-
🧠 - Skills detailed
#Security #Data Ingestion #Scrum #Datasets #ETL (Extract, Transform, Load) #Data Engineering #Data Quality #Big Data #Scala #BigQuery #GIT #Apache Spark #Storage #Automation #GCP (Google Cloud Platform) #Programming #SQL (Structured Query Language) #Agile #Computer Science #Data Pipeline #Data Processing #Jenkins #Python #Cloud #Dataflow #PySpark #Hadoop #Spark (Apache Spark)
Role description
Job Title: Big Data Developer with GCP
Client: American Express (Amex)
Location: Phoenix, AZ & New York, NY (local candidates only; must be comfortable with face-to-face interviews)
Visa: OPT, H4 EAD, L2
Experience: 6+ years

Role Overview
We are seeking an experienced Big Data Developer to join our client Amex. The ideal candidate will have strong expertise in Big Data technologies, Spark, Python, and GCP (Google Cloud Platform) to design, develop, and optimize large-scale data processing pipelines and enterprise applications.

Key Responsibilities
• Design and develop scalable data pipelines and ETL processes using Spark and Python (see the example sketch at the end of this posting).
• Work on Big Data ecosystems for handling large datasets across distributed systems.
• Deploy, manage, and optimize applications on Google Cloud Platform (GCP) services such as BigQuery, Dataproc, Dataflow, and Pub/Sub.
• Collaborate with business stakeholders and data engineers to deliver high-quality data solutions.
• Ensure data quality, reliability, and security across applications.
• Participate in Agile/Scrum ceremonies and contribute to sprint deliverables.
• Troubleshoot and resolve performance issues in data pipelines and distributed systems.

Required Skills
• Strong hands-on experience with Big Data technologies (Hadoop/Spark ecosystem).
• Proficiency in Apache Spark (PySpark, DataFrames, RDDs).
• Strong Python programming skills for data engineering and automation.
• Hands-on experience with GCP services (BigQuery, Dataflow, Dataproc, Pub/Sub, Cloud Storage).
• Solid understanding of ETL frameworks, data ingestion, and data transformation.
• Knowledge of SQL and database optimization techniques.
• Experience with CI/CD pipelines, Git, and Jenkins is a plus.
• Good problem-solving and communication skills.

Qualifications
• Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
• 5+ years of professional experience in Big Data development and data engineering.
• Prior experience with financial-domain projects is a plus.
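To give a concrete sense of the kind of work described above, here is a minimal, hypothetical PySpark sketch of a GCS-to-BigQuery ETL job. It is not Amex code; the bucket, dataset, table, and column names are placeholders, and it assumes a Dataproc cluster with the spark-bigquery connector available.

```python
# Hypothetical sketch: read raw CSVs from Cloud Storage, aggregate, write to BigQuery.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("gcs-to-bigquery-etl")
    .getOrCreate()
)

# Read raw transaction data from a GCS bucket (placeholder path).
raw = (
    spark.read
    .option("header", True)
    .csv("gs://example-bucket/raw/transactions/*.csv")
)

# Basic cleansing and aggregation: drop incomplete rows, compute daily totals per account.
daily_totals = (
    raw.dropna(subset=["account_id", "amount", "txn_date"])
       .withColumn("amount", F.col("amount").cast("double"))
       .groupBy("txn_date", "account_id")
       .agg(F.sum("amount").alias("total_amount"))
)

# Write to BigQuery via the spark-bigquery connector (assumed installed on the cluster);
# temporaryGcsBucket is a placeholder staging bucket for the indirect write path.
(
    daily_totals.write
    .format("bigquery")
    .option("table", "example_dataset.daily_account_totals")
    .option("temporaryGcsBucket", "example-temp-bucket")
    .mode("overwrite")
    .save()
)

spark.stop()
```

In practice a job like this would typically be submitted to Dataproc (or run as a Dataflow/BigQuery-native pipeline instead), with orchestration, data-quality checks, and CI/CD handled by the tooling listed under Required Skills.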