Sibitalent Corp

Data Architect

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Google Cloud Data Architect in Dallas, TX, on a long-term contract. Requires 10-14 years of data engineering experience, 5+ years on GCP, and a Google Cloud Professional Cloud Architect certification. Key skills include data lake architecture, data ingestion, and analytics.
🌎 - Country
United States
πŸ’± - Currency
$ USD
-
πŸ’° - Day rate
Unknown
-
πŸ—“οΈ - Date
February 21, 2026
πŸ•’ - Duration
Unknown
-
🏝️ - Location
Hybrid
-
πŸ“„ - Contract
Unknown
-
πŸ”’ - Security
Unknown
-
πŸ“ - Location detailed
Texas, United States
-
🧠 - Skills detailed
#Dataflow #Pig #Programming #Datasets #Scala #IAM (Identity and Access Management) #BigQuery #Hadoop #Data Management #Airflow #Cloud #Logging #Compliance #Clustering #Monitoring #Spark (Apache Spark) #Batch #Data Lake #Security #Data Quality #Python #HDFS (Hadoop Distributed File System) #BI (Business Intelligence) #Data Architecture #Metadata #Sqoop (Apache Sqoop) #SQL (Structured Query Language) #Storage #Data Governance #Data Ingestion #Migration #Apache Beam #Data Pipeline #GCP (Google Cloud Platform) #VPC (Virtual Private Cloud) #DevOps #Observability #Schema Design #"ETL (Extract #Transform #Load)" #Data Catalog #Data Processing #Data Engineering #Data Lineage #Computer Science
Role description
Title: Google Cloud Data Architect - IAM Data Modernization
Location: Dallas, TX (Hybrid Onsite)
Duration: Long Term Contract

Job Description:

Required Skills:

1. Data Lake Architecture & Storage
‒ Proven experience designing and implementing data lake architectures (e.g., Bronze/Silver/Gold or layered models)
‒ Strong knowledge of Cloud Storage (GCS) design, including bucket layout, naming conventions, lifecycle policies, and access controls
‒ Experience with Hadoop/HDFS architecture, distributed file systems, and data locality principles
‒ Hands-on experience with columnar data formats (Parquet, Avro, ORC) and compression techniques
‒ Expertise in partitioning strategies, backfills, and large-scale data organization
‒ Ability to design data models optimized for analytics and BI consumption

2. Data Ingestion & Orchestration
‒ Experience building batch and streaming ingestion pipelines using GCP-native services
‒ Knowledge of Pub/Sub-based streaming architectures, event schema design, and versioning
‒ Strong understanding of incremental ingestion and CDC patterns, including idempotency and deduplication
‒ Hands-on experience with workflow orchestration tools (Cloud Composer/Airflow)
‒ Ability to design robust error handling, replay, and backfill mechanisms

3. Data Processing & Transformation
‒ Experience developing scalable batch and streaming pipelines using Dataflow (Apache Beam) and/or Spark (Dataproc)
‒ Strong proficiency in BigQuery SQL, including query optimization, partitioning, clustering, and cost control
‒ Hands-on experience with Hadoop MapReduce and ecosystem tools (Hive, Pig, Sqoop)
‒ Advanced Python programming skills for data engineering, including testing and maintainable code design
‒ Experience managing schema evolution while minimizing downstream impact

4. Analytics & Data Serving
‒ Expertise in BigQuery performance optimization and data serving patterns
‒ Experience building semantic layers and governed metrics for consistent analytics
‒ Familiarity with BI integration, access controls, and dashboard standards
‒ Understanding of data exposure patterns via views, APIs, or curated datasets

5. Data Governance, Quality & Metadata
‒ Experience implementing data catalogs, metadata management, and ownership models
‒ Understanding of data lineage for auditability and troubleshooting
‒ Strong focus on data quality frameworks, including validation, freshness checks, and alerting
‒ Experience defining and enforcing data contracts, schemas, and SLAs
‒ Familiarity with audit logging and compliance readiness

6. Cloud Platform Management
‒ Strong hands-on experience with Google Cloud Platform (GCP), including project setup, environment separation, billing, quotas, and cost controls
‒ Expertise in IAM and security best practices, including least-privilege access, service accounts, and role-based access
‒ Solid understanding of VPC networking, private access patterns, and secure service connectivity
‒ Experience with encryption and key management (KMS, CMEK) and security auditing

7. DevOps, Platform & Reliability
‒ Proven ability to build CI/CD pipelines for data and infrastructure workloads
‒ Experience managing secrets securely using GCP Secret Manager
‒ Ownership of observability, SLOs, dashboards, alerts, and runbooks
‒ Proficiency in logging, monitoring, and alerting for data pipelines and platform reliability

Qualifications:
‒ Experience: 10-14+ years in data engineering/architecture, with 5+ years designing on GCP at scale; prior on-prem-to-cloud migration is a must.
‒ Education: Bachelor's/Master's in Computer Science, Information Systems, or equivalent experience.
‒ Certifications: Google Cloud Professional Cloud Architect (required, or obtained within 3 months). Plus: Professional Data Engineer, Security Engineer.
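For context on the Dataflow (Apache Beam) and Python skills listed under "Data Processing & Transformation", the sketch below shows the kind of batch pipeline the role describes: read raw JSON events from a Cloud Storage "bronze" bucket, validate them, and append curated rows to a BigQuery table. This is a minimal illustration, not part of the posting; the project, bucket, table names, and event schema are hypothetical placeholders.

```python
# Minimal Apache Beam batch pipeline sketch (hypothetical names throughout).
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def parse_event(line):
    """Parse one JSON line; drop records missing required fields."""
    record = json.loads(line)
    if "event_id" not in record or "event_ts" not in record:
        return []  # invalid records could instead be routed to a dead-letter sink
    return [{
        "event_id": record["event_id"],
        "event_ts": record["event_ts"],
        "payload": json.dumps(record.get("payload", {})),
    }]


def run():
    options = PipelineOptions(
        runner="DataflowRunner",          # or "DirectRunner" for local testing
        project="example-project",        # hypothetical project id
        region="us-central1",
        temp_location="gs://example-bucket/tmp",
    )
    with beam.Pipeline(options=options) as p:
        (
            p
            | "ReadBronze" >> beam.io.ReadFromText("gs://example-bronze/events/*.json")
            | "ParseAndValidate" >> beam.FlatMap(parse_event)
            | "WriteSilver" >> beam.io.WriteToBigQuery(
                "example-project:silver.events",
                schema="event_id:STRING,event_ts:TIMESTAMP,payload:STRING",
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
                create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
            )
        )


if __name__ == "__main__":
    run()
```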
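Similarly, the BigQuery partitioning and clustering called out under "Analytics & Data Serving" could look like the following sketch using the google-cloud-bigquery client: a curated table partitioned by event timestamp and clustered by a customer key so that typical date-bounded queries scan less data. All names and the schema are assumptions for illustration only.

```python
# Sketch: create a date-partitioned, clustered BigQuery table (hypothetical names).
from google.cloud import bigquery

client = bigquery.Client(project="example-project")  # hypothetical project id

schema = [
    bigquery.SchemaField("event_id", "STRING", mode="REQUIRED"),
    bigquery.SchemaField("event_ts", "TIMESTAMP", mode="REQUIRED"),
    bigquery.SchemaField("customer_id", "STRING"),
    bigquery.SchemaField("amount", "NUMERIC"),
]

table = bigquery.Table("example-project.gold.daily_events", schema=schema)
# Partition by event timestamp (daily) and cluster by customer_id to cut scan cost.
table.time_partitioning = bigquery.TimePartitioning(
    type_=bigquery.TimePartitioningType.DAY,
    field="event_ts",
)
table.clustering_fields = ["customer_id"]

client.create_table(table, exists_ok=True)
```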