

Euclid Innovations
Data Engineer - GCP
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is a Data Engineer - GCP position with a contract length of "unknown" and a pay rate of "unknown." Key skills include Apache Spark, Python, and GCP experience. Strong ETL, data migration, and big data handling skills are required.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
April 23, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Charlotte, NC
-
🧠 - Skills detailed
#Scala #Spark (Apache Spark) #PySpark #GCP (Google Cloud Platform) #Data Quality #Data Migration #Big Data #Dataflow #BigQuery #Data Engineering #Data Ingestion #Data Processing #Data Pipeline #Cloud #ETL (Extract, Transform, Load) #Data Science #Apache Spark #Data Integration #Google Cloud Storage #Python #Storage #Datasets #Batch #Migration
Role description
Key Responsibilities
• Design and build scalable ETL/data pipelines using Spark and Python (see the illustrative sketch after this list)
• Develop data workflows to ingest, transform, and move large datasets
• Implement data routing logic to direct data to:
o GCP (BigQuery, Dataflow, Dataproc)
o On-prem platforms (DPC)
• Ensure data quality, validation, and reconciliation across systems
• Collaborate with data science and platform teams to support predictive model pipelines
• Optimize performance and scalability for high-volume data processing
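As a rough, illustrative sketch of the kind of pipeline these responsibilities describe (not part of the role description itself): a minimal PySpark job that ingests raw files from Google Cloud Storage, applies a transformation, runs a simple row-count reconciliation, and routes the result to both BigQuery and a GCS copy. All bucket names, table names, and columns below are hypothetical placeholders, and the BigQuery write assumes the spark-bigquery connector is available on the cluster (e.g. on Dataproc).

# Minimal illustrative PySpark job: GCS -> transform -> validate -> BigQuery.
# All paths, buckets, tables, and columns are placeholders, not project details.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("example-etl").getOrCreate()

# Ingest: read raw CSV files landed in a GCS bucket (placeholder path).
raw = spark.read.option("header", True).csv("gs://example-bucket/landing/orders/*.csv")

# Transform: type the amount column and drop rows that fail validation.
clean = (
    raw.withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("amount").isNotNull())
)

# Reconcile: a simple source-vs-target row-count check before loading.
raw_count, clean_count = raw.count(), clean.count()
print(f"rows read: {raw_count}, rows after validation: {clean_count}")

# Load: write to BigQuery (requires the spark-bigquery connector) and keep a
# parquet copy on GCS for on-prem or other downstream consumers.
(clean.write.format("bigquery")
      .option("table", "example_dataset.orders_clean")
      .option("temporaryGcsBucket", "example-bucket-tmp")
      .mode("append")
      .save())
clean.write.mode("overwrite").parquet("gs://example-bucket/curated/orders/")

spark.stop()

Writing to both BigQuery and a curated GCS path mirrors the dual routing called out above (GCP targets plus on-prem/downstream platforms); a production pipeline would replace the print-based reconciliation with proper data-quality checks.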
Required Skills
• Strong hands-on experience with Apache Spark / PySpark for large-scale data processing
• Proficiency in Python for data engineering (ETL pipelines)
• Experience designing and developing data pipelines / data engineering workflows
• Solid background in ETL, data ingestion, transformation, and data movement
• Experience working with big data technologies and handling large datasets (batch/streaming)
• Experience with cloud platforms – GCP (Google Cloud Platform)
o BigQuery, Dataflow, Dataproc, GCS (Google Cloud Storage)
• Experience with data migration / data integration projects
• Understanding of data pipeline architecture and distributed systems






