

Zeus Solutions Inc
GCP Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a GCP Data Engineer in Houston, requiring 8+ years of data engineering experience and 4+ years in GCP. Contract duration exceeds 6 months, with a pay rate of "unknown." Key skills include Python, SQL, and data pipeline architecture.
Country
United States
Currency
Unknown
-
Day rate
Unknown
-
Date
November 4, 2025
Duration
More than 6 months
-
Location
On-site
-
Contract
Unknown
-
Security
Unknown
-
Location detailed
Houston, TX 77027
-
Skills detailed
#BigQuery #JSON (JavaScript Object Notation) #GCP (Google Cloud Platform) #SQL (Structured Query Language) #DevOps #Data Pipeline #IAM (Identity and Access Management) #Datasets #GitHub #Storage #Dataflow #SAP #Cloud #Data Quality #REST (Representational State Transfer) #Terraform #API (Application Programming Interface) #Python #Batch #REST API #Big Data #Scala #Data Engineering
Role description
Job description
We are seeking a skilled GCP Data Engineer based in Houston (onsite), with 8+ years of overall data engineering experience and at least 4 years of hands-on expertise in Google Cloud Platform (GCP).
Responsibilities
Develop, construct, test, and maintain data acquisition pipelines for large volumes of structured and unstructured data, including batch and real-time processing in Google Cloud.
Build large and complex datasets based on business requirements.
Construct "big data" pipeline architecture.
Identify opportunities for data acquisition by working with stakeholders and business clients.
Translate business needs to technical requirements.
Leverage a variety of tools in the Google Cloud ecosystem, such as Python, Dataflow, Datastream, CDC (Change Data Capture), Cloud Functions, Cloud Run, Pub/Sub, BigQuery, and Cloud Storage, to integrate systems and data pipelines (see the sketch after this list).
Use logs & alerts to effectively monitor pipelines.
Use SAP SLT to replicate SAP tables to Google Cloud.
Develop JSON messaging structures for integrating with various applications.
Leverage DevOps and CI/CD practices (GitHub, Terraform) to ensure the reliability and scalability of data pipelines.
Partition and cluster tables in BigQuery for efficient retrieval, and use IAM roles and policy tags to secure the data.
Use roles to secure access to datasets and authorized views to share data between projects.
Design and build ingestion pipelines using REST APIs.
Recommend ways to improve data quality, reliability, and efficiency.
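For illustration only, here is a minimal sketch of the kind of pipeline the responsibilities above describe: a Python Cloud Function (2nd gen) that consumes a JSON message from Pub/Sub and streams it into a BigQuery table. The project, dataset, and table names are hypothetical placeholders, not details from this posting.

```python
# Minimal sketch, assuming a 2nd-gen Cloud Function triggered by Pub/Sub.
# The table name below is a placeholder, not taken from this job posting.
import base64
import json

import functions_framework
from google.cloud import bigquery

BQ_TABLE = "my-project.analytics.events"  # hypothetical partitioned/clustered table

bq_client = bigquery.Client()


@functions_framework.cloud_event
def ingest_event(cloud_event):
    """Decode the Pub/Sub payload and stream it as one row into BigQuery."""
    # Pub/Sub delivers the message body base64-encoded inside the CloudEvent data.
    payload = base64.b64decode(cloud_event.data["message"]["data"]).decode("utf-8")
    row = json.loads(payload)

    # Streaming insert; any problems come back as a list of per-row error dicts.
    errors = bq_client.insert_rows_json(BQ_TABLE, [row])
    if errors:
        # Raising surfaces the failure in Cloud Logging and lets Pub/Sub retry.
        raise RuntimeError(f"BigQuery insert failed: {errors}")
```

This sketch only ties together the Pub/Sub, Cloud Functions, BigQuery, and JSON-messaging items listed above; a production pipeline would add schema management, monitoring, and CI/CD as described in the responsibilities.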
Required
7+ years of experience in a GCP data engineering role (if onsite/nearshore)
At least 2 years of hands-on experience with the GCP tools highlighted (Python, SQL, Dataflow, Datastream, CDC (Change Data Capture), Cloud Functions, Cloud Run, Pub/Sub, BigQuery, Cloud Storage)
Strong communication skills, with the ability to work independently and manage small projects.
Job Types: Full-time, Contract, Temporary, Permanent
Work Location: In person






