

Sibitalent Corp
Data Architect
Featured Role | Apply directly with Data Freelance Hub
This role is for a Google Cloud Data Architect in Dallas, TX, on a long-term contract. Requires 10-14 years of data engineering experience, 5+ years on GCP, and a Google Cloud Professional Cloud Architect certification. Key skills include data lake architecture, data ingestion, and analytics.
Country: United States
Currency: $ USD
Day rate: Unknown
Date: February 21, 2026
Duration: Unknown
Location: Hybrid
Contract: Unknown
Security: Unknown
Location detailed: Texas, United States
Skills detailed: #Dataflow #Pig #Programming #Datasets #Scala #IAM (Identity and Access Management) #BigQuery #Hadoop #Data Management #Airflow #Cloud #Logging #Compliance #Clustering #Monitoring #Spark (Apache Spark) #Batch #Data Lake #Security #Data Quality #Python #HDFS (Hadoop Distributed File System) #BI (Business Intelligence) #Data Architecture #Metadata #Sqoop (Apache Sqoop) #SQL (Structured Query Language) #Storage #Data Governance #Data Ingestion #Migration #Apache Beam #Data Pipeline #GCP (Google Cloud Platform) #VPC (Virtual Private Cloud) #DevOps #Observability #Schema Design #ETL (Extract, Transform, Load) #Data Catalog #Data Processing #Data Engineering #Data Lineage #Computer Science
Role description
Title: Google Cloud Data Architect - IAM Data Modernization
Location: Dallas, TX (Hybrid Onsite)
Duration: Long Term Contract
Job description:
Required Skills:
1. Data Lake Architecture & Storage
• Proven experience designing and implementing data lake architectures (e.g., Bronze/Silver/Gold or other layered models)
• Strong knowledge of Cloud Storage (GCS) design, including bucket layout, naming conventions, lifecycle policies, and access controls
• Experience with Hadoop/HDFS architecture, distributed file systems, and data locality principles
• Hands-on experience with columnar data formats (Parquet, Avro, ORC) and compression techniques
• Expertise in partitioning strategies, backfills, and large-scale data organization
• Ability to design data models optimized for analytics and BI consumption (a partitioned-layout sketch follows this list)
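Illustrative only, not part of the posting: a minimal Python sketch, assuming pyarrow, of the Hive-style partitioned Parquet layout a layered (Silver) zone typically uses. The path, columns, and values are placeholders.

```python
# Minimal sketch of a Hive-style partitioned Parquet layout for a
# "silver" lake layer. Paths and data are placeholders, not from the role.
import pyarrow as pa
import pyarrow.parquet as pq

# Example batch of cleansed records.
table = pa.table({
    "event_date": ["2026-02-21", "2026-02-21", "2026-02-20"],
    "region": ["us", "eu", "us"],
    "user_id": [101, 202, 303],
})

# partition_cols produces directories such as
#   silver/events/event_date=2026-02-21/region=us/...
# which Spark jobs and BigQuery external tables can prune on.
pq.write_to_dataset(
    table,
    root_path="silver/events",  # in GCS this would be gs://<bucket>/silver/events
    partition_cols=["event_date", "region"],
)
```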
2. Data Ingestion & Orchestration
• Experience building batch and streaming ingestion pipelines using GCP-native services
• Knowledge of Pub/Sub-based streaming architectures, event schema design, and versioning
• Strong understanding of incremental ingestion and CDC patterns, including idempotency and deduplication
• Hands-on experience with workflow orchestration tools (Cloud Composer/Airflow)
• Ability to design robust error handling, replay, and backfill mechanisms (see the DAG sketch below)
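A minimal Cloud Composer/Airflow DAG sketch (illustrative; Airflow 2.4+ syntax, all names are placeholders) of the idempotent, backfill-friendly pattern the bullets above describe: the logical date selects the partition to (re)write, so replays overwrite rather than duplicate.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def ingest_partition(ds: str, **_):
    # Idempotent by construction: the logical date ("ds") names the target
    # partition, so a rerun or backfill overwrites it instead of appending.
    print(f"Loading source extract for {ds} into raw/events/dt={ds}/")


with DAG(
    dag_id="daily_ingest",           # placeholder name
    start_date=datetime(2026, 1, 1),
    schedule="@daily",
    catchup=True,                    # enables historical backfills
) as dag:
    PythonOperator(task_id="ingest", python_callable=ingest_partition)
```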
3. Data Processing & Transformation
• Experience developing scalable batch and streaming pipelines using Dataflow (Apache Beam) and/or Spark (Dataproc)
• Strong proficiency in BigQuery SQL, including query optimization, partitioning, clustering, and cost control
• Hands-on experience with Hadoop MapReduce and ecosystem tools (Hive, Pig, Sqoop)
• Advanced Python programming skills for data engineering, including testing and maintainable code design
• Experience managing schema evolution while minimizing downstream impact (a Beam pipeline sketch follows this list)
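A minimal Apache Beam batch pipeline sketch (illustrative; runs locally on the DirectRunner, and the same code targets Dataflow with runner options): parse, filter, and count events per key.

```python
import apache_beam as beam

with beam.Pipeline() as p:  # DirectRunner by default; pass DataflowRunner options on GCP
    (
        p
        | "Read" >> beam.Create(["u1,click", "u2,view", "u1,click"])  # placeholder input
        | "Parse" >> beam.Map(lambda line: tuple(line.split(",")))
        | "KeepClicks" >> beam.Filter(lambda kv: kv[1] == "click")
        | "CountPerUser" >> beam.combiners.Count.PerKey()
        | "Print" >> beam.Map(print)  # emits ('u1', 2)
    )
```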
4. Analytics & Data Serving
• Expertise in BigQuery performance optimization and data serving patterns
• Experience building semantic layers and governed metrics for consistent analytics
• Familiarity with BI integration, access controls, and dashboard standards
• Understanding of data exposure patterns via views, APIs, or curated datasets (see the BigQuery DDL sketch below)
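A sketch of the serving-side table design the bullets above imply, using the official google-cloud-bigquery client (project, dataset, and column names are placeholders): date partitioning plus clustering on common filter columns is what keeps scans, and therefore cost, down.

```python
from google.cloud import bigquery

client = bigquery.Client()  # uses Application Default Credentials

ddl = """
CREATE TABLE IF NOT EXISTS `my_project.gold.daily_metrics` (
  event_date DATE,
  region     STRING,
  user_id    INT64,
  revenue    NUMERIC
)
PARTITION BY event_date
CLUSTER BY region, user_id
"""

client.query(ddl).result()  # .result() blocks until the DDL job completes
```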
5. Data Governance, Quality & Metadata
• Experience implementing data catalogs, metadata management, and ownership models
• Understanding of data lineage for auditability and troubleshooting
• Strong focus on data quality frameworks, including validation, freshness checks, and alerting
• Experience defining and enforcing data contracts, schemas, and SLAs
• Familiarity with audit logging and compliance readiness (a freshness-check sketch follows this list)
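A minimal sketch of one data quality control named above, a freshness check with alerting; the SLA and the partition lookup are placeholder assumptions (in practice the date would come from INFORMATION_SCHEMA or the data catalog).

```python
from datetime import date, timedelta

def check_freshness(latest_partition: date, max_age_days: int = 1) -> None:
    """Raise if the newest partition breaches its freshness SLA."""
    age = (date.today() - latest_partition).days
    if age > max_age_days:
        raise RuntimeError(
            f"Freshness SLA breached: newest partition is {age} days old "
            f"(limit {max_age_days}); alert the owning team."
        )

# Placeholder call; production code would query table metadata instead.
check_freshness(date.today() - timedelta(days=0))
```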
6. Cloud Platform Management
• Strong hands-on experience with Google Cloud Platform (GCP), including project setup, environment separation, billing, quotas, and cost controls
• Expertise in IAM and security best practices, including least-privilege access, service accounts, and role-based access
• Solid understanding of VPC networking, private access patterns, and secure service connectivity
• Experience with encryption and key management (KMS, CMEK) and security auditing (see the sketch below)
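A sketch of two of the platform controls above, a default CMEK key on a bucket and a least-privilege IAM grant, using the google-cloud-storage client; every resource name is a placeholder, and in practice this usually lives in Terraform rather than ad-hoc scripts.

```python
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("example-data-lake")  # placeholder bucket

# Default CMEK: new objects are encrypted with this Cloud KMS key.
bucket.default_kms_key_name = (
    "projects/example-proj/locations/us/keyRings/lake/cryptoKeys/lake-key"
)
bucket.patch()

# Least privilege: grant the pipeline's service account read-only access
# on this bucket instead of a broad project-level role.
policy = bucket.get_iam_policy(requested_policy_version=3)
policy.bindings.append({
    "role": "roles/storage.objectViewer",
    "members": {"serviceAccount:pipeline@example-proj.iam.gserviceaccount.com"},
})
bucket.set_iam_policy(policy)
```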
7. DevOps, Platform & Reliability
• Proven ability to build CI/CD pipelines for data and infrastructure workloads
• Experience managing secrets securely using GCP Secret Manager
• Ownership of observability, SLOs, dashboards, alerts, and runbooks
• Proficiency in logging, monitoring, and alerting for data pipelines and platform reliability (a Secret Manager sketch follows this list)
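A minimal sketch of runtime secret retrieval with GCP Secret Manager, as the second bullet above requires; project and secret IDs are placeholders.

```python
from google.cloud import secretmanager

client = secretmanager.SecretManagerServiceClient()
name = "projects/example-proj/secrets/db-password/versions/latest"  # placeholder
response = client.access_secret_version(request={"name": name})
password = response.payload.data.decode("UTF-8")
# Hand the value to the pipeline's connection factory; never log it.
```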
Qualifications
Experience: 10-14+ years in data engineering/architecture, 5+ years designing on GCP at scale; prior on-prem-to-cloud migration a must.
Education: Bachelor's/Master's in Computer Science, Information Systems, or equivalent experience.
Certifications: Google Cloud Professional Cloud Architect (required, or to be obtained within 3 months). Nice to have: Professional Data Engineer, Professional Cloud Security Engineer.





