

Stellar Consulting Solutions, LLC
GCP Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a GCP Data Engineer on a 6-month contract, offering a competitive pay rate. It requires 4-6 years of Data Engineering experience, including at least 2 years in GCP, and strong Teradata skills. GCP certification and healthcare experience are preferred.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
October 16, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Hartford, CT
-
🧠 - Skills detailed
#Dataflow #GCP (Google Cloud Platform) #Monitoring #Migration #Data Storage #Data Warehouse #Data Migration #Compliance #BTEQ #AI (Artificial Intelligence) #Argo #BigQuery #Cloud #Data Pipeline #Scripting #Data Quality #Python #Kafka (Apache Kafka) #Java #Apache Kafka #DevOps #Data Engineering #ML (Machine Learning) #ETL (Extract, Transform, Load) #Storage #Security #Airflow #SQL (Structured Query Language) #Data Architecture #Logging #Data Governance #GIT #Teradata
Role description
We are seeking skilled Data Engineer(s) to support a high-impact enterprise data migration initiative. The goal is to migrate data warehouse assets and ETL pipelines from Teradata to Google Cloud Platform (GCP). The role involves hands-on development, testing, and optimization of data pipelines and warehouse structures in GCP, ensuring minimal disruption and maximum performance.
Key Responsibilities:
Lead and execute migration of data and ETL workflows from Teradata to GCP-based services such as BigQuery, Cloud Storage, Dataflow, Dataproc, and Composer (Airflow).
Analyze and map existing Teradata workloads to appropriate GCP equivalents.
Rewrite SQL logic, scripts, and procedures in GCP-compliant formats (e.g., standard SQL for BigQuery).
Collaborate with data architects and business stakeholders to define migration strategies, validate data quality, and ensure compliance.
Develop automated workflows for data movement and transformation using GCP-native tools and/or custom scripts (Python); a brief orchestration sketch follows this list.
Optimize data storage, query performance, and costs in the cloud environment.
Implement monitoring, logging, and alerting for all migration pipelines and production workloads.
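For illustration only, below is a minimal sketch of what one such automated workflow might look like: a Cloud Composer (Airflow) DAG that loads a Teradata extract from Cloud Storage into BigQuery and then runs a rewritten standard-SQL transformation. Every project, bucket, dataset, table, and DAG name here is a hypothetical placeholder, not a detail from this posting.
```python
# Sketch of a Composer (Airflow) DAG: stage a Teradata extract from Cloud
# Storage into BigQuery, then apply a rewritten standard-SQL transformation.
# All resource names below are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import GCSToBigQueryOperator
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

with DAG(
    dag_id="teradata_extract_to_bq",  # hypothetical DAG name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Load the exported Teradata file into a raw BigQuery staging table.
    load_raw = GCSToBigQueryOperator(
        task_id="load_raw_extract",
        bucket="example-migration-bucket",        # placeholder bucket
        source_objects=["exports/orders_*.csv"],  # placeholder export files
        destination_project_dataset_table="example_project.raw.orders",
        source_format="CSV",
        skip_leading_rows=1,
        autodetect=True,
        write_disposition="WRITE_TRUNCATE",
    )

    # Run the standard-SQL rewrite of the legacy Teradata/BTEQ transformation.
    transform = BigQueryInsertJobOperator(
        task_id="transform_orders",
        configuration={
            "query": {
                "query": """
                    SELECT order_id, customer_id, order_total
                    FROM `example_project.raw.orders`
                    WHERE TRUE  -- BigQuery expects WHERE/GROUP BY/HAVING alongside QUALIFY
                    QUALIFY ROW_NUMBER() OVER (
                        PARTITION BY order_id ORDER BY load_ts DESC) = 1
                """,
                "destinationTable": {
                    "projectId": "example_project",
                    "datasetId": "curated",
                    "tableId": "orders",
                },
                "writeDisposition": "WRITE_TRUNCATE",
                "useLegacySql": False,
            }
        },
    )

    load_raw >> transform
```
A production pipeline would add the monitoring, logging, and alerting noted above; this sketch only shows the load-then-transform shape of a typical migration DAG.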
Required Skills:
4 to 6+ years of experience in Data Engineering, with at least 2 years in GCP.
Strong hands-on experience in Teradata data warehousing, BTEQ, and complex SQL.
Solid knowledge of GCP services: BigQuery, Dataflow, Cloud Storage, Pub/Sub, Composer, and Dataproc.
Experience with ETL/ELT pipelines using custom scripting tools (Python/Java).
Proven ability to refactor and translate legacy logic from Teradata to GCP (a short illustration follows this list).
Familiarity with CI/CD, Git, Argo CD, and DevOps practices in cloud data environments.
Strong analytical, troubleshooting, and communication skills.
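As a small illustration of the Teradata-to-GCP translation skill above, the sketch below runs a rewritten query through the google-cloud-bigquery client. The legacy BTEQ-style statement appears only as a comment for comparison, and the project, dataset, and table names are hypothetical.
```python
# Minimal sketch: execute a standard-SQL rewrite of a legacy Teradata query
# in BigQuery using the google-cloud-bigquery client library.
from google.cloud import bigquery

# Legacy Teradata/BTEQ form (for comparison only):
#   SEL customer_id, SUM(order_total) AS total_spend
#   FROM sales.orders
#   GROUP BY 1;

REWRITTEN_SQL = """
SELECT customer_id, SUM(order_total) AS total_spend
FROM `example_project.sales.orders`
GROUP BY customer_id
"""

def run_rewritten_query(project_id: str = "example_project") -> None:
    """Run the rewritten query and print a quick row count as a sanity check."""
    client = bigquery.Client(project=project_id)
    job = client.query(REWRITTEN_SQL)  # starts an asynchronous query job
    rows = list(job.result())          # blocks until the job finishes
    print(f"Query returned {len(rows)} rows (job ID: {job.job_id})")

if __name__ == "__main__":
    run_rewritten_query()
```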
Preferred Qualifications:
GCP certification, ideally Professional Data Engineer.
Exposure to Apache Kafka, Cloud Functions, or AI/ML pipelines on GCP.
Experience working in the healthcare domain.
Knowledge of data governance, security, and compliance in cloud ecosystems.
Behavioral Skills:
Problem-solving mindset
Attention to detail
Accountability and ownership
Curiosity and a commitment to staying current with evolving GCP services