

CriticalRiver Inc.
Senior Data Engineer - Snowflake Coding
Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer in Pleasanton, California, offering a contract of more than 6 months at $80.00 - $90.00 per hour. Requires 8+ years in Data Engineering, 3+ years with Snowflake & dbt, and expertise in SQL and Python.
Country: United States
Currency: $ USD
Day rate: $720
Date: October 30, 2025
Duration: More than 6 months
Location: Hybrid
Contract: Unknown
Security: Unknown
Location detailed: Pleasanton, CA
Skills detailed: #Computer Science #Cloud #Leadership #Python #GCP (Google Cloud Platform) #GitHub #Observability #Scala #ETL (Extract, Transform, Load) #Big Data #Snowflake #Automation #Security #Storage #Apache Iceberg #Monitoring #DevOps #Fivetran #AI (Artificial Intelligence) #Data Engineering #Git #Azure #Documentation #Datasets #Hadoop #Programming #SaaS (Software as a Service) #Airflow #Data Framework #Schema Design #Strategy #Snowpipe #Data Integration #dbt (data build tool) #Data Quality #AWS (Amazon Web Services) #Macros #Metadata #Data Architecture #NiFi (Apache NiFi) #Data Storage #Java #Spark (Apache Spark) #SQL (Structured Query Language) #API (Application Programming Interface) #Data Modeling
Role description
We are currently looking for a Senior Data Engineer. The job description for the position is below; please review it and let me know if you would like to apply.
If you are not available for any reason, please pass the opportunity along to people you know, colleagues, or friends who are looking for job opportunities.
Title: Senior Data Engineer
Location: Pleasanton, California (hybrid work)
Contract: Full-time
Role Overview
As a Senior Data Engineer at CR, you will lead the design, development, and ownership of core data infrastructure, from pipelines to storage to data products. You'll be a strategic partner across teams, ensuring that our data systems are robust, scalable, and optimized for performance. With executive visibility and deep cross-functional collaboration, the solutions you build will directly influence product strategy and operational excellence.
This is a unique opportunity to build from the ground up while working with cutting-edge technologies such as PostgreSQL, dbt, Snowflake, and modern orchestration frameworks.
Key Responsibilities
Architect, design, and implement scalable ELT pipelines using Snowflake, dbt, and Postgres.
Optimize data models in both Snowflake (cloud DW) and Postgres (transactional/operational data).
Implement advanced Snowflake features (Snowpipe, Streams, Tasks, Dynamic Tables, RBAC, Security).
Design and maintain hybrid pipelines (Postgres → Snowflake) for seamless data integration (see the sketch after this list).
Establish data quality and testing frameworks using dbt tests and metadata-driven validation.
Implement CI/CD workflows (Git, GitHub Actions, or similar) for dbt/Snowflake/Postgres projects.
Drive observability, monitoring, and performance tuning of pipelines (logs, lineage, metrics).
Provide technical leadership and mentorship to engineers and analysts.
Collaborate with Finance, Product, Marketing, and GTM teams to deliver trusted, business-critical data models.
Support financial data processes (consolidation, reconciliation, close automation).
Evaluate and experiment with emerging AI and data technologies, providing feedback to influence product direction.
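For illustration only, the following is a minimal sketch of the kind of hybrid Postgres-to-Snowflake hand-off referenced above, assuming psycopg2 and the Snowflake Python connector are available. All connection parameters, table names, and column names are hypothetical placeholders, not details from this role.

import psycopg2
import snowflake.connector

# Extract recent rows from the operational Postgres database.
# Host, credentials, and the "orders" table are placeholders.
pg = psycopg2.connect(host="pg.example.internal", dbname="appdb",
                      user="etl_reader", password="***")
with pg, pg.cursor() as cur:
    cur.execute(
        "SELECT id, customer_id, amount, updated_at "
        "FROM orders WHERE updated_at >= NOW() - INTERVAL '1 day'"
    )
    rows = cur.fetchall()
pg.close()

# Load into the Snowflake warehouse; account, warehouse, and target
# database/schema are placeholders.
sf = snowflake.connector.connect(
    account="myorg-myaccount", user="ETL_LOADER", password="***",
    warehouse="LOAD_WH", database="ANALYTICS", schema="RAW",
)
try:
    if rows:
        sf.cursor().executemany(
            "INSERT INTO ORDERS (ID, CUSTOMER_ID, AMOUNT, UPDATED_AT) "
            "VALUES (%s, %s, %s, %s)",
            rows,
        )
finally:
    sf.close()

At production scale a bulk path (files copied into a stage, or Snowpipe on object storage) would replace row-by-row inserts; the sketch only shows the shape of the hand-off between the two systems.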
Requirements
Education: Bachelor's or Master's degree in Computer Science, Engineering, or related field.
Experience: 8+ years in Data Engineering, including 3+ years in Snowflake & dbt.
Database Expertise:
Deep hands-on experience with dbt (Core/Cloud): macros, testing, documentation, and packages.
Strong expertise in Postgres (schema design, optimization, stored procedures, large-scale workloads).
Advanced knowledge of Snowflake (data modeling, performance tuning, governance).
Programming: Proficient in SQL and Python, including API integrations and automation.
Orchestration & ETL: Hands-on with Airflow, Dagster, Prefect (or similar), and ETL/ELT tools like Fivetran and NiFi (see the Airflow sketch after this list).
Data Architecture: Strong understanding of data warehousing, dimensional modeling, medallion architecture, and system design principles.
Cloud: Experience with AWS (mandatory); GCP or Azure is a plus.
DevOps: Experience with Git/GitOps CI/CD pipelines for data workflows.
Leadership: Proven ability to mentor teams, collaborate cross-functionally, and deliver impact in fast-paced environments.
Communication: Excellent written and verbal communication skills.
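As a rough illustration of the orchestration experience described above, a minimal Airflow 2.x DAG that runs a daily dbt build is sketched below. The dag_id, schedule, and project directories are hypothetical placeholders, not details from this posting.

from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_dbt_build",          # placeholder name
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Run dbt models and tests in one pass; the task fails on any test failure.
    dbt_build = BashOperator(
        task_id="dbt_build",
        bash_command="dbt build --project-dir /opt/dbt/analytics --profiles-dir /opt/dbt",
    )

Tooling such as a dedicated dbt-Airflow integration could give task-level visibility into individual models, but the plain shell command keeps the sketch self-contained.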
Nice-to-Have (Preferred Qualifications)
Experience with finance/accounting datasets.
Prior experience in SaaS or high-growth technology environments.
Familiarity with big data frameworks (Spark, Hadoop, Flink).
Experience with open table formats (e.g., Apache Iceberg) and multi-cloud data storage.
Skills & Technologies
Languages: SQL, Python, Scala/Java/Rust
Platforms: Postgres, Snowflake, dbt
Orchestration: Airflow, Dagster, Prefect
Data Practices: Data modeling, ETL/ELT pipelines, data architecture, observability
Domains: Data engineering, analytics, infrastructure, product data systems
Job Types: Full-time, Contract
Pay: $80.00 - $90.00 per hour
Expected hours: 40 per week
Application Question(s):
How many years of experience do you have with dbt?
Location: Pleasanton, CA (Required)
Ability to Commute: Pleasanton, CA (Required)
Work Location: In person





