Programmers.io

GCP DATA ENGINEER

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a GCP Data Engineer with a contract length of "Unknown", offering a pay rate of "Unknown". Key skills include GCP expertise, advanced SQL, Python, and healthcare domain knowledge. Experience with data architecture and compliance is essential.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date
January 16, 2026
🕒 - Duration
Unknown
🏝️ - Location
Unknown
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
Nashville, TN
🧠 - Skills detailed
#Microsoft Power BI #Compliance #GCP (Google Cloud Platform) #Data Pipeline #GDPR (General Data Protection Regulation) #Data Architecture #Security #Apache Beam #Kafka (Apache Kafka) #Programming #Automation #Data Engineering #Looker #Data Warehouse #Docker #ETL (Extract, Transform, Load) #Deployment #Infrastructure as Code (IaC) #Python #Scala #Strategy #Terraform #PySpark #BigQuery #Data Ingestion #BI (Business Intelligence) #Clustering #Kubernetes #SQL (Structured Query Language) #Data Integrity #Dataflow #Spark (Apache Spark) #Batch #Leadership #DevOps #Storage #Data Modeling #IAM (Identity and Access Management) #Cloud #Data Management #Snowflake
Role description
Overview
We are seeking an experienced Google Cloud Platform (GCP) Data Architect to design, build, and manage scalable, secure, and cost-optimized data solutions aligned with reporting needs. The role involves translating business requirements into robust technical architectures, ensuring data integrity, and enabling advanced analytics through GCP services such as BigQuery and Cloud Storage. The ideal candidate will lead strategy, design, and implementation efforts while collaborating with stakeholders to drive data-driven decision-making.
Key Responsibilities:
• Architect Scalable Data Solutions: Design and implement data warehouses, data marts, data lakes, and batch and/or real-time streaming pipelines using GCP-native tools.
• Data Modeling & Integration: Design and develop conformed data models (star/snowflake schemas) and ETL/ELT processes for analytics and BI tools (MicroStrategy, Looker, Power BI).
• Pipeline Development: Build scalable pipelines and automate data ingestion and transformation workflows using BigQuery, Dataflow, Dataproc/PySpark, Cloud Functions, Pub/Sub, Kafka, and Cloud Composer for orchestration (see the Dataflow and Cloud Composer sketches after this description).
• Security & Compliance: Implement IAM, encryption, and compliance standards (GDPR, HIPAA) using GCP security tools.
• Performance Optimization: Apply best practices for partitioning, clustering, and BI Engine to ensure high performance and cost efficiency (a partitioning example follows this description).
• DevOps & Automation: Integrate CI/CD pipelines, infrastructure as code (Terraform), and containerization (Docker, Kubernetes) for deployment and scalability.
• Collaboration & Leadership: Engage with stakeholders including leadership, project managers, BAs, engineers, QA, and platform teams; mentor team members and provide technical guidance on best practices.
• Troubleshooting: Resolve complex technical issues and support incident response.
• Healthcare Domain Expertise: Ensure compliance with healthcare regulations and stay current on industry trends.
Required Skills & Working Experience:
• GCP Expertise: BigQuery, Cloud Storage, Dataflow (Apache Beam with Python), Dataproc/PySpark, Cloud Functions, Pub/Sub, Kafka, Cloud Composer.
• Programming: Advanced SQL and Python for analytics and pipeline development.
• Performance Optimization: Experience optimizing query performance in BigQuery, including partitioning, clustering, and BI Engine.
• Automation: Experience with CI/CD for data pipelines, IaC for data services, and automation of ETL/ELT processes.
• Security: Strong knowledge of IAM, encryption, and compliance frameworks.
• Architecture Design: Ability to create fault-tolerant, highly available, and cost-optimized solutions.
• Communication: Excellent ability to convey technical concepts to both technical and non-technical stakeholders.
• Domain Knowledge: Familiarity with healthcare data management and regulatory compliance.
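As a rough illustration of the streaming ingestion work described under Pipeline Development, the sketch below reads events from Pub/Sub and appends them to a BigQuery table with Apache Beam (the Dataflow SDK). The project, topic, table, and message format are hypothetical placeholders, not details of this role.

```python
# Minimal Beam streaming sketch: Pub/Sub -> parse JSON -> BigQuery (append).
# Assumes a hypothetical project "example-project", topic "events", and table "analytics.events".
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def parse_event(message: bytes) -> dict:
    """Decode a Pub/Sub message payload into a row dict for BigQuery."""
    return json.loads(message.decode("utf-8"))


def run():
    # Runner, region, and other Dataflow flags would be supplied at launch time.
    options = PipelineOptions(streaming=True)

    with beam.Pipeline(options=options) as pipeline:
        (
            pipeline
            | "ReadFromPubSub" >> beam.io.ReadFromPubSub(
                topic="projects/example-project/topics/events")
            | "ParseJson" >> beam.Map(parse_event)
            | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
                table="example-project:analytics.events",
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
                create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER,
            )
        )


if __name__ == "__main__":
    run()
```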
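Cloud Composer orchestration, also named in the responsibilities, typically looks something like the minimal Airflow DAG below. The DAG id, schedule, and the BigQuery stored procedure it calls are assumptions for illustration, not the team's actual workflow.

```python
# Sketch of a Cloud Composer (Airflow 2.4+) DAG that runs one daily ELT step in BigQuery.
import pendulum
from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

with DAG(
    dag_id="daily_claims_load",              # hypothetical pipeline name
    schedule="0 6 * * *",                    # once a day, after upstream files land
    start_date=pendulum.datetime(2026, 1, 1, tz="UTC"),
    catchup=False,
) as dag:
    # Merge newly ingested rows into the reporting table via a (hypothetical) stored procedure.
    merge_claims = BigQueryInsertJobOperator(
        task_id="merge_claims",
        configuration={
            "query": {
                "query": "CALL `example-project.analytics.sp_merge_claims`()",
                "useLegacySql": False,
            }
        },
    )
```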
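For the partitioning and clustering practices called out under Performance Optimization, one common pattern is shown below using the BigQuery Python client; the dataset, table, and column names are illustrative assumptions only.

```python
# Create a date-partitioned, clustered BigQuery table so BI queries scan only the
# relevant partitions and blocks (lower latency and cost). Names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")

table = bigquery.Table(
    "example-project.analytics.claims",
    schema=[
        bigquery.SchemaField("claim_id", "STRING"),
        bigquery.SchemaField("member_id", "STRING"),
        bigquery.SchemaField("service_date", "DATE"),
        bigquery.SchemaField("amount", "NUMERIC"),
    ],
)

# Partition by the date column most queries filter on; cluster on common filter/join keys.
table.time_partitioning = bigquery.TimePartitioning(
    type_=bigquery.TimePartitioningType.DAY,
    field="service_date",
)
table.clustering_fields = ["member_id", "claim_id"]

client.create_table(table, exists_ok=True)
```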