

BRATHON
GCP Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a GCP Data Engineer in Phoenix, AZ, on a long-term contract. Key skills include strong SQL, BigQuery, Python, and Tableau. Experience in data engineering and visualization, along with familiarity with GCP, is required.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date
May 16, 2026
🕒 - Duration
Unknown
🏝️ - Location
Hybrid
📄 - Contract
W2 Contractor
🔒 - Security
Unknown
📍 - Location detailed
Phoenix, AZ
🧠 - Skills detailed
#Datasets #Cloud #Spark (Apache Spark) #Data Quality #SQL (Structured Query Language) #Data Pipeline #GCP (Google Cloud Platform) #Microsoft Power BI #API (Application Programming Interface) #ML (Machine Learning) #Tableau #Data Engineering #PySpark #BigQuery #BI (Business Intelligence) #Python #Visualization #Requirements Gathering #ETL (Extract, Transform, Load) #AI (Artificial Intelligence)
Role description
Job Title: GCP Data Engineer
Location: Phoenix, AZ (Hybrid)
Job Type: Long-Term Contract
1. Business Division & Project Summary
Business Division: Enterprise Business Intelligence (EBI) – AMEX
Position supports AMEX’s Enterprise Business Intelligence (EBI) organization, specifically the Experience Analytics (XP) team.
This division focuses on:
• Marketing analytics and campaign intelligence
• Building data pipelines, ETL processes, and insight delivery mechanisms
• Delivering insights for:
  • AMEX‑initiated campaigns
  • Merchant‑partner campaigns
  • Campaign creative journeys & customer experience pathways
• Enabling multiple reporting/insight delivery channels:
  • Dashboards (Tableau)
  • API‑based insights
  • Report distribution
  • Audio‑layer reporting (narrative insights)
Project Summary (New Workload Coming)
An upcoming initiative is driving the hiring need. Details are not yet confirmed but are likely to include:
• Creation of a completely new data pipeline
• Development of new visualization/reporting layers for this pipeline
• Work within GCP ecosystem — BigQuery, Cloud Composer, DataProc, Spark, Python
2. Role Summary & Responsibilities
This is a hybrid role combining:
A. Data Engineering (Primary – ~60–70%)
Key responsibilities:
• Build and maintain ETL pipelines
• Work within AMEX’s GCP ecosystem:
  • BigQuery (primary analytics database)
  • Cloud Composer (orchestration)
  • DataProc (Spark/Python‑based ETL jobs)
• Data curation, aggregation, and transformation (SQL-heavy)
• Potential work with PySpark depending on pipeline design
• Implement AI‑powered data quality checks (future roadmap; an illustrative check is sketched after this list)
• Contribute to domain-specific datasets (marketing analytics)
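The AI‑powered checks are a roadmap item and the posting gives no specifics, so as a minimal stand‑in, here is what a conventional rule‑based quality gate in PySpark might look like. Every table and column name below is hypothetical:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("campaign-dq-checks").getOrCreate()

# Hypothetical curated table produced by an upstream ETL step.
df = spark.read.table("marketing.campaign_events")

total = df.count()

# Rule 1: key identifiers must never be null.
null_ids = df.filter(F.col("campaign_id").isNull()).count()

# Rule 2: event timestamps must not lie in the future.
future_ts = df.filter(F.col("event_ts") > F.current_timestamp()).count()

# Fail the job (and whatever DAG orchestrates it) if a rule is violated.
if null_ids > 0 or future_ts > 0:
    raise ValueError(
        f"DQ failure: {null_ids}/{total} null campaign_id, "
        f"{future_ts}/{total} future event_ts"
    )
```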
Tech stack mentioned (a minimal orchestration sketch follows this list):
• Python
• PySpark
• SQL
• BigQuery
• GCP (Composer, DataProc)
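The posting names the stack but not the pipeline design. A common pattern with this stack is a Cloud Composer (managed Airflow) DAG that submits a PySpark job to DataProc and then runs a curation query in BigQuery; the sketch below assumes that pattern, with all project, cluster, bucket, and table names hypothetical:

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator
from airflow.providers.google.cloud.operators.dataproc import DataprocSubmitJobOperator

PROJECT_ID = "my-project"   # hypothetical
REGION = "us-central1"      # hypothetical
CLUSTER = "etl-cluster"     # hypothetical

with DAG(
    dag_id="campaign_etl",
    start_date=datetime(2026, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Step 1: PySpark transformation job on DataProc.
    transform = DataprocSubmitJobOperator(
        task_id="transform_events",
        project_id=PROJECT_ID,
        region=REGION,
        job={
            "reference": {"project_id": PROJECT_ID},
            "placement": {"cluster_name": CLUSTER},
            "pyspark_job": {"main_python_file_uri": "gs://my-bucket/jobs/transform.py"},
        },
    )

    # Step 2: SQL-heavy aggregation into a curated BigQuery table.
    curate = BigQueryInsertJobOperator(
        task_id="curate_campaign_daily",
        configuration={
            "query": {
                "query": """
                    CREATE OR REPLACE TABLE marketing.campaign_daily AS
                    SELECT campaign_id, DATE(event_ts) AS day, COUNT(*) AS events
                    FROM marketing.campaign_events
                    GROUP BY campaign_id, day
                """,
                "useLegacySql": False,
            }
        },
    )

    transform >> curate
```

Keeping the transform on DataProc and the aggregation in BigQuery mirrors the split the posting implies: Spark for heavy ETL, SQL for curation.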
B. Data Visualization / BI Engineering (Secondary – ~30–40%)
Responsibilities:
• Build dashboards for marketing analytics
• Visualize campaign journeys, insights & trends
• Use Tableau primarily (Power BI is deprioritized)
• Support multiple insight delivery formats:
  • Dashboards
  • API-based insight delivery
  • Narrative/Audio insights
• Convert analytical output into consumable business insights
Tools (an illustrative BigQuery query follows this list):
• Tableau
• (Optional) Power BI
• APIs (for insight publishing)
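Tableau's native BigQuery connector reads curated tables directly, so much of the BI work is shaping the SQL behind a dashboard. As a minimal sketch of the kind of aggregation a campaign‑journey dashboard might consume, run here through the BigQuery Python client (project, dataset, and column names are hypothetical):

```python
from google.cloud import bigquery

client = bigquery.Client()  # uses application-default credentials

# Hypothetical aggregation a campaign-journey dashboard might read;
# Tableau would typically point at a curated table shaped like this.
sql = """
    SELECT
      campaign_id,
      journey_step,
      COUNT(DISTINCT customer_id) AS customers,
      SAFE_DIVIDE(COUNTIF(converted), COUNT(*)) AS conversion_rate
    FROM `my-project.marketing.campaign_journeys`
    GROUP BY campaign_id, journey_step
    ORDER BY campaign_id, journey_step
"""

for row in client.query(sql).result():
    print(row.campaign_id, row.journey_step, row.customers, row.conversion_rate)
```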
3. Expectations & Role Fit Criteria
A. Core Expectations
• Strong data engineering foundation
• Strong SQL and BigQuery experience
• Hands-on with Tableau
• GCP ecosystem familiarity (mandatory)
• Ability to manage both data pipelines & visualization layers
• Prior exposure to ETL, Spark, or Python desirable
• AI/ML experience a plus
B. Soft Skills
• Ability to collaborate with global teams (India + US)
• Strong communication for requirements gathering
• Independence in handling end‑to‑end data + BI workflows