

OKAYA INFOCOM
Local Candidate--Senior Data Engineer (GCP, BigQuery, dbt, Lakehouse)--Iselin, New Jersey--Contract on W2
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer in Iselin, New Jersey, on a 3-to-6-month W2 contract-to-hire engagement. Key skills include GCP, BigQuery, dbt, and Python. The role requires 5+ years of data engineering experience and expertise in data modeling and pipeline orchestration.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
680
-
🗓️ - Date
May 7, 2026
🕒 - Duration
3 to 6 months
-
🏝️ - Location
On-site
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Iselin, NJ
-
🧠 - Skills detailed
#Macros #Metadata #Clustering #Datasets #Cloud #ETL (Extract, Transform, Load) #Python #BigQuery #Security #SQL (Structured Query Language) #dbt (data build tool) #GCP (Google Cloud Platform) #GitHub #Data Access #Data Engineering #Airflow #Data Pipeline #Scala #Data Modeling #IAM (Identity and Access Management) #Monitoring
Role description
Title: Senior Data Engineer (GCP, BigQuery, dbt, Lakehouse)
Location: Iselin, New Jersey
Duration: 3-6 month contract-to-hire
Job Description:
Technical skills and proficiency requirements
• 5+ years of data engineering experience with strong expertise in GCP (BigQuery, GCS) and modern data stack tools
• Advanced hands-on experience with dbt Core, including incremental models, snapshots, macros, testing, and semantic layer development
• Strong BigQuery SQL expertise, including window functions, complex CTEs, SCD modeling, and query/cost optimization techniques
• Experience building and managing data pipelines and orchestration workflows using Python and tools like Prefect or Cloud Composer (Airflow)
• Solid understanding of data platform engineering, including BigQuery administration (partitioning, clustering), BigLake, IAM security, and CI/CD for data workflows (GitHub Actions or Cloud Build)
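For illustration only, here is a minimal sketch of the kind of dbt Core incremental model the requirements above describe, written for the BigQuery adapter. The model, source, and column names (fct_orders, stg_orders, order_id, order_date, customer_id) are hypothetical and not taken from the posting.

-- models/marts/fct_orders.sql (hypothetical model name)
{{ config(
    materialized = 'incremental',
    unique_key   = 'order_id',
    partition_by = {'field': 'order_date', 'data_type': 'date'},
    cluster_by   = ['customer_id']
) }}

select
    order_id,
    customer_id,
    order_date,
    amount
from {{ ref('stg_orders') }}

{% if is_incremental() %}
  -- on incremental runs, only pick up rows newer than what is already loaded
  where order_date > (select max(order_date) from {{ this }})
{% endif %}

A matching schema test (unique and not_null on order_id) and a timestamp-strategy dbt snapshot would cover the testing and snapshot/SCD expectations listed above.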
Day to Day
• Design and develop scalable data models using dbt Core, including incremental models, snapshots, macros, and testing frameworks
• Build and optimize BigQuery SQL transformations, leveraging advanced techniques such as window functions, complex CTEs, and SCD patterns
• Ensure cost-efficient query performance through partition pruning, clustering strategies, and query optimization
• Manage and configure BigQuery datasets, including partitioning, clustering, materialized views, and external tables
• Develop and maintain data pipelines and orchestration workflows using tools like Prefect or Cloud Composer
• Implement event-driven pipelines (e.g., GCS-triggered workflows) with proper retry logic, monitoring, and alerting
• Build and maintain data validation frameworks using Python and tools like Great Expectations
• Configure and manage BigLake external tables over GCS (Parquet/Iceberg), including metadata caching and partition management
• Implement secure data access controls using GCP IAM, including service accounts and authorized views
• Collaborate with cross-functional teams to ensure high-quality, reliable data delivery
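As a rough sketch of the BigQuery administration and BigLake work described in the responsibilities above: one natively partitioned and clustered table, and one BigLake external table over Parquet files in GCS with metadata caching. The dataset, table, bucket, and connection names are made up, and the BigLake connection is assumed to already exist.

-- Native table with partition pruning and clustering
CREATE TABLE IF NOT EXISTS analytics.events
(
  event_id    STRING,
  customer_id STRING,
  event_date  DATE,
  payload     JSON
)
PARTITION BY event_date
CLUSTER BY customer_id
OPTIONS (require_partition_filter = TRUE);  -- every query must prune on event_date

-- BigLake external table over GCS Parquet, with cached metadata
CREATE EXTERNAL TABLE analytics.raw_events
WITH CONNECTION `my-project.us.lake-conn`
OPTIONS (
  format = 'PARQUET',
  uris = ['gs://my-lake-bucket/events/*.parquet'],
  max_staleness = INTERVAL 4 HOUR,
  metadata_cache_mode = 'AUTOMATIC'
);

Orchestration of loads into these tables (Prefect or Cloud Composer) and IAM-scoped access through service accounts and authorized views would sit on top of objects like these, per the bullets above.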
DETAILS NEEDED
Years of experience in the following:
• GCP (Google Cloud Platform)
• dbt (models, snapshots, macros, semantic layer)
• BigQuery administration
• BigQuery functions/performance/partitioning
• BigLake / lakehouse architecture
• Apache
• Pipeline orchestration
• Python
• Basic Cloud security
• Data modeling (Bronze, Silver, Gold)
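As a hedged sketch of the Bronze/Silver/Gold layering mentioned above: a silver-layer dbt model that cleans and types a bronze (raw) source before gold-layer marts build on it. The source and column names are hypothetical.

-- models/silver/stg_orders.sql (hypothetical silver-layer model)
{{ config(materialized = 'view') }}

select
    cast(order_id as string)        as order_id,
    cast(customer_id as string)     as customer_id,
    date(order_ts)                  as order_date,
    safe_cast(amount as numeric)    as amount
from {{ source('bronze', 'raw_orders') }}
where order_id is not null   -- drop malformed bronze rows before they reach gold marts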






