

Euclid Innovations
GCP Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a GCP Data Engineer in Charlotte, NC, for 12 months at a $600/day rate. Key skills include Apache Spark, Python, and GCP. Financial services experience is preferred. On-site work is required.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
600
-
🗓️ - Date
April 21, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Charlotte, NC
-
🧠 - Skills detailed
#Data Migration #GCP (Google Cloud Platform) #Cloud #Data Ingestion #Google Cloud Storage #Data Pipeline #Spark (Apache Spark) #Apache Spark #Python #ETL (Extract, Transform, Load) #Data Integration #Data Engineering #PySpark #Big Data #Datasets #ML (Machine Learning) #Data Science #Dataflow #Storage #BigQuery #Migration #Batch #Data Processing #Data Quality #Scala
Role description
GCP Data Engineer
Charlotte, NC
12 Months
We are seeking experienced Data Engineers to support a large-scale data platform transformation for a leading banking client.
This role focuses on building Spark-based data pipelines and enabling data movement between GCP (Google Cloud Platform) and on-prem systems (DPC) based on governance and model requirements.
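A minimal PySpark sketch of this kind of governance-driven hybrid routing, assuming a hypothetical data_classification column on the source, a placeholder on-prem JDBC database, and the spark-bigquery connector available on the cluster; every host, table, and bucket name below is invented for illustration, not taken from the posting:

```python
# Sketch only: governance-driven routing between GCP and on-prem targets.
# Every host, table, and bucket name here is a hypothetical placeholder.
import os

from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("hybrid-routing-sketch")
    .getOrCreate()  # assumes JDBC driver + spark-bigquery connector jars are on the cluster
)

# Ingest from a placeholder on-prem relational source over JDBC.
src = (
    spark.read.format("jdbc")
    .option("url", "jdbc:postgresql://onprem-db.example.internal:5432/warehouse")
    .option("dbtable", "public.transactions")
    .option("user", "etl_user")
    .option("password", os.environ.get("DB_PASSWORD", ""))  # use a secret manager in practice
    .load()
)

# Hypothetical governance rule: cloud-approved rows land in BigQuery,
# everything else is written back to an on-prem (DPC-style) path.
cloud_ok = src.filter("data_classification = 'cloud_approved'")
restricted = src.filter("data_classification <> 'cloud_approved'")

(cloud_ok.write.format("bigquery")
    .option("table", "my-project.analytics.transactions")
    .option("temporaryGcsBucket", "my-staging-bucket")  # connector staging area
    .mode("append")
    .save())

restricted.write.mode("append").parquet("hdfs://onprem-cluster/dpc/transactions")
```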
Key Responsibilities
• Design and build scalable ETL/data pipelines using Spark and Python
• Develop data workflows to ingest, transform, and move large datasets
• Implement data routing logic to direct data to:
  • GCP (BigQuery, Dataflow, Dataproc)
  • On-prem platforms (DPC)
• Ensure data quality, validation, and reconciliation across systems; a reconciliation sketch follows this list
• Collaborate with data science and platform teams to support predictive model pipelines
• Optimize performance and scalability for high-volume data processing
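As a concrete illustration of the reconciliation responsibility above, a hedged sketch of a count-and-checksum comparison between a source and a target copy of the same dataset; the key column name (txn_id) is a hypothetical placeholder:

```python
# Sketch: reconcile a source and a target copy of the same dataset.
# The key column name (txn_id) is a hypothetical placeholder.
from pyspark.sql import DataFrame
from pyspark.sql import functions as F

def reconcile(source: DataFrame, target: DataFrame, key: str = "txn_id") -> dict:
    """Compare row counts and an order-independent checksum across systems."""
    report = {
        "source_rows": source.count(),
        "target_rows": target.count(),
        # Sum of per-row hashes of the key column, insensitive to row order.
        "source_hash": source.select(F.sum(F.hash(key))).first()[0],
        "target_hash": target.select(F.sum(F.hash(key))).first()[0],
    }
    report["counts_match"] = report["source_rows"] == report["target_rows"]
    report["hashes_match"] = report["source_hash"] == report["target_hash"]
    return report

# Keys present on one side only (would drive a re-sync of missing records):
# missing = source.select("txn_id").subtract(target.select("txn_id"))
```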
Required Skills
• Strong hands-on experience with Apache Spark / PySpark for large-scale data processing
• Proficiency in Python for data engineering (ETL pipelines)
• Experience designing and developing data pipelines / data engineering workflows
• Solid background in ETL, data ingestion, transformation, and data movement
• Experience working with big data technologies and handling large datasets (batch/streaming)
• Experience with cloud platforms – GCP (Google Cloud Platform):
  • BigQuery, Dataflow, Dataproc, and GCS (Google Cloud Storage); a minimal read/write sketch follows this list
• Experience with data migration / data integration projects
• Understanding of data pipeline architecture and distributed systems
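For the GCP stack named above, a minimal sketch of a Dataproc-style PySpark job reading Parquet from GCS and writing to BigQuery via the spark-bigquery connector; the bucket, project, and dataset names are invented for illustration:

```python
# Sketch: GCS -> transform -> BigQuery, as typically run on Dataproc.
# Assumes the spark-bigquery connector is available; all names are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("gcs-to-bq-sketch").getOrCreate()

# Read raw Parquet landed in a (hypothetical) GCS bucket.
raw = spark.read.parquet("gs://my-landing-bucket/events/dt=2026-04-21/")

# Example transformation: light cleanup plus a load timestamp.
clean = (
    raw.dropDuplicates(["event_id"])
       .withColumn("load_ts", F.current_timestamp())
)

# Write to BigQuery; the connector stages data through a GCS bucket.
(clean.write.format("bigquery")
    .option("table", "my-project.analytics.events")
    .option("temporaryGcsBucket", "my-staging-bucket")
    .mode("overwrite")
    .save())
```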
Preferred Skills
• Experience with GCP (BigQuery, Dataflow, Dataproc, GCS)
• Exposure to hybrid environments (cloud + on-prem)
• Familiarity with ML/data pipelines (supporting models, not building them)
• Experience in financial services / banking domain (nice to have)






