GCP Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a GCP Data Engineer on a contract basis, offering a competitive pay rate. Key skills include Google Cloud, Python, BigQuery, and DevOps practices. Experience with data pipelines, SAP, and JSON integration is required.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
-
🗓️ - Date discovered
August 19, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
Unknown
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
Houston, TX
🧠 - Skills detailed
#Data Pipeline #GitHub #JSON (JavaScript Object Notation) #Data Quality #BigQuery #Data Engineering #GCP (Google Cloud Platform) #API (Application Programming Interface) #REST (Representational State Transfer) #Storage #REST API #Batch #Datasets #Scala #IAM (Identity and Access Management) #SAP #DevOps #Cloud #Python #Big Data #Terraform
Role description
Job description:
• Develop, construct, test, and maintain data acquisition pipelines for large volumes of structured and unstructured data, including batch and real-time processing in Google Cloud
• Build large and complex datasets based on business requirements
• Construct 'big data' pipeline architecture
• Identify opportunities for data acquisition by working with stakeholders and business clients
• Translate business needs into technical requirements
• Leverage a variety of tools in the Google Cloud ecosystem, such as Python, Dataflow, Datastream, CDC (Change Data Capture), Cloud Functions, Cloud Run, Pub/Sub, BigQuery, and Cloud Storage, to integrate systems and data pipelines (a minimal sketch of this pattern follows the list)
• Use logs and alerts to monitor pipelines effectively
• Use SAP SLT to replicate SAP tables to Google Cloud
• Develop JSON messaging structures for integrating with various applications
• Apply DevOps and CI/CD practices (GitHub, Terraform) to ensure the reliability and scalability of data pipelines
• Partition, cluster, and retrieve content in BigQuery, using IAM roles and policy tags to secure the data
• Use roles to secure access to datasets and authorized views to share data between projects
• Design and build ingestion pipelines using REST APIs
• Recommend ways to improve data quality, reliability, and efficiency
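For illustration only (not part of the original posting): a minimal Python sketch of the kind of ingestion work described above, assuming a Pub/Sub-triggered Cloud Function that streams each JSON message into a BigQuery table. The project, dataset, and table names are placeholders.

```python
import base64
import json

from google.cloud import bigquery

# Placeholder table ID; a real engagement would supply its own
# project, dataset, and table names.
BQ_TABLE = "my-project.my_dataset.events"

bq_client = bigquery.Client()


def ingest_event(event, context):
    """Pub/Sub-triggered Cloud Function (1st gen background signature).

    Decodes the base64-encoded JSON message published to the topic and
    streams it into a BigQuery table via the streaming insert API.
    """
    payload = json.loads(base64.b64decode(event["data"]).decode("utf-8"))

    errors = bq_client.insert_rows_json(BQ_TABLE, [payload])
    if errors:
        # Raising lets Cloud Logging and alerting surface the failure.
        raise RuntimeError(f"BigQuery insert failed: {errors}")
```

Partitioning, clustering, and policy tags on the destination table would typically be declared separately, for example in the Terraform that provisions the dataset.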