Qualis1 Inc.
GCP BigQuery Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a GCP BigQuery Data Engineer in Santa Clara, CA, on a 6+ month contract with an unspecified pay rate. Key skills include GCP BigQuery, Python, SQL, and experience with Airflow or Cloud Composer. A Bachelor’s degree in a related field is required.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
520
🗓️ - Date
November 1, 2025
🕒 - Duration
More than 6 months
🏝️ - Location
On-site
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
Santa Clara County, CA
🧠 - Skills detailed
#Data Governance #Data Science #Data Modeling #Scala #ETL (Extract, Transform, Load) #Snowflake #BigQuery #Data Lineage #GCP (Google Cloud Platform) #Storage #Airflow #DevOps #Security #Data Management #Deployment #Data Engineering #Programming #Data Pipeline #Version Control #Computer Science #Cloud #GitHub #Python #Metadata #SQL (Structured Query Language) #SQL Queries #Data Architecture #Data Lake
Role description
Job Title: GCP BigQuery Data Engineer
Location: Santa Clara, CA (100% Onsite)
Duration: 6+ Months (Contract)
Position Overview
We’re seeking a highly skilled GCP BigQuery Data Engineer to design, optimize, and scale data pipelines in a petabyte-scale analytics environment. The role demands deep expertise in Google BigQuery, Python, and SQL, plus strong experience with Airflow, Cloud Composer, or Dagster for orchestration and performance tuning.
You’ll work with cross-functional data teams to model data, automate workflows, and enhance performance in complex cloud environments.
Key Responsibilities
• Design and implement efficient data pipelines and ETL/ELT processes in GCP (BigQuery, data lakes on Cloud Storage buckets).
• Develop and optimize data models for large-scale analytical workloads.
• Write, tune, and maintain complex SQL queries and Python scripts for data transformation.
• Perform performance tuning and query optimization within petabyte-scale BigQuery environments.
• Build and manage workflow orchestration using Airflow, Cloud Composer, or Dagster (see the sketch after this list).
• Collaborate with analysts, data scientists, and stakeholders to align data architecture with business goals.
• Apply CI/CD best practices and use GitHub for version control and deployment.
• Monitor, troubleshoot, and enhance pipeline reliability and scalability.
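To make the orchestration bullet concrete, here is a minimal sketch of a daily ELT pipeline, assuming Airflow 2.x with the apache-airflow-providers-google package installed; the DAG name, bucket, and dataset/table names are hypothetical placeholders, not details from this posting.

```python
# Minimal sketch, assuming Airflow 2.x with apache-airflow-providers-google.
# All resource names below are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import (
    BigQueryInsertJobOperator,
)
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import (
    GCSToBigQueryOperator,
)

with DAG(
    dag_id="events_daily_load",  # hypothetical pipeline name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",           # Airflow >= 2.4 spelling of schedule_interval
    catchup=False,
) as dag:
    # "L" of ELT: land the day's raw files from Cloud Storage in a staging table.
    load_raw = GCSToBigQueryOperator(
        task_id="load_raw_events",
        bucket="example-data-lake",                 # placeholder bucket
        source_objects=["events/{{ ds }}/*.json"],  # one folder per run date
        destination_project_dataset_table="example_project.staging.events",
        source_format="NEWLINE_DELIMITED_JSON",
        write_disposition="WRITE_TRUNCATE",
        autodetect=True,
    )

    # "T" of ELT: transform inside BigQuery with SQL, idempotently via MERGE.
    transform = BigQueryInsertJobOperator(
        task_id="merge_into_analytics",
        configuration={
            "query": {
                "query": """
                    MERGE `example_project.analytics.events` AS t
                    USING `example_project.staging.events` AS s
                    ON t.event_id = s.event_id
                    WHEN NOT MATCHED THEN INSERT ROW
                """,
                "useLegacySql": False,
            }
        },
    )

    load_raw >> transform
```

The same DAG runs unchanged on Cloud Composer, which is GCP’s managed Airflow service.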
Required Skills & Experience
• 4 – 7 years in data engineering or analytics.
• Strong proficiency in GCP BigQuery and SQL, including advanced query optimization (see the partitioning sketch after this list).
• Experience in Python programming and data modeling.
• Solid knowledge of ETL tools and pipeline orchestration frameworks (Airflow, Cloud Composer, Dagster).
• Experience with performance tuning, data standardization, and metric governance in large-scale systems.
• Familiarity with CI/CD pipelines and GitHub workflows.
• Understanding of Snowflake or similar cloud data warehousing tools is a plus.
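To illustrate the query-optimization requirement, here is a minimal sketch of the standard partition-and-cluster pattern in BigQuery, assuming the google-cloud-bigquery Python client and application-default credentials; the project, dataset, table, and column names are hypothetical.

```python
# Minimal sketch, assuming the google-cloud-bigquery client library and
# default credentials. Project, dataset, table, and column names are
# hypothetical; the point is the partition-pruning pattern.
from google.cloud import bigquery

client = bigquery.Client(project="example_project")  # placeholder project

# Partition by event date and cluster by the columns queries filter on,
# so BigQuery can skip whole partitions and blocks instead of full scans.
client.query(
    """
    CREATE TABLE IF NOT EXISTS `example_project.analytics.events`
    PARTITION BY DATE(event_ts)
    CLUSTER BY customer_id, event_type
    AS SELECT * FROM `example_project.staging.events`
    """
).result()

# Filtering on the partitioning column restricts the scan to one day's
# partition; at petabyte scale that turns terabytes scanned into gigabytes.
job = client.query(
    """
    SELECT customer_id, COUNT(*) AS purchases
    FROM `example_project.analytics.events`
    WHERE DATE(event_ts) = "2025-11-01"
      AND event_type = "purchase"
    GROUP BY customer_id
    """
)
job.result()  # wait for the job to finish
print(f"Bytes processed: {job.total_bytes_processed}")
```

Partition pruning like this is typically the first lever for controlling both latency and on-demand query cost at this scale.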
Preferred Skills
• Experience building pipelines across multi-cloud environments.
• Exposure to Data Governance / Security frameworks (RBAC, data lineage, metadata management).
• Hands-on experience with containerization and DevOps concepts.
Education
• Bachelor’s degree in Computer Science, Information Systems, Data Engineering, or a related discipline.