

SnapCode Inc
Senior Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer in Pleasanton, California (hybrid) with a contract length of "unknown" and a pay rate of "unknown." Requires 8+ years in Data Engineering, 3+ years in Snowflake & dbt, and expertise in SQL, Python, and cloud platforms.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
October 19, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Pleasanton, CA
-
🧠 - Skills detailed
#GIT #AWS (Amazon Web Services) #AI (Artificial Intelligence) #dbt (data build tool) #Observability #SQL (Structured Query Language) #Programming #Leadership #Data Architecture #Cloud #DevOps #Storage #Strategy #Macros #Data Quality #GCP (Google Cloud Platform) #Automation #GitHub #Azure #Airflow #Fivetran #API (Application Programming Interface) #SnowPipe #Metadata #Data Modeling #Data Integration #Monitoring #NiFi (Apache NiFi) #Snowflake #Security #Scala #Schema Design #Documentation #ETL (Extract, Transform, Load) #Data Engineering #Python
Role description
Title: Senior Data Engineer
Location: Pleasanton, California (hybrid work)
Role Overview
As a Senior/Lead Data Engineer, you will lead the design, development, and ownership of core data infrastructure, from pipelines to storage to data products. You'll be a strategic partner across teams, ensuring that our data systems are robust, scalable, and optimized for performance. With executive visibility and deep cross-functional collaboration, the solutions you build will directly influence product strategy and operational excellence.
This is a unique opportunity to build from the ground up while working with cutting-edge technologies such as PostgreSQL, dbt, Snowflake, and modern orchestration frameworks.
Key Responsibilities
• Architect, design, and implement scalable ELT pipelines using Snowflake, dbt, and Postgres.
• Optimize data models in both Snowflake (cloud DW) and Postgres (transactional/operational data).
• Implement advanced Snowflake features (Snowpipe, Streams, Tasks, Dynamic Tables, RBAC, Security).
• Design and maintain hybrid pipelines (Postgres → Snowflake) for seamless data integration (a minimal sketch follows this list).
• Establish data quality and testing frameworks using dbt tests and metadata-driven validation.
• Implement CI/CD workflows (Git, GitHub Actions, or similar) for dbt/Snowflake/Postgres projects.
• Drive observability, monitoring, and performance tuning of pipelines (logs, lineage, metrics).
• Provide technical leadership and mentorship to engineers and analysts.
• Collaborate with Finance, Product, Marketing, and GTM teams to deliver trusted, business-critical data models.
• Support financial data processes (consolidation, reconciliation, close automation).
• Evaluate and experiment with emerging AI and data technologies, providing feedback to influence product direction.
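For context, below is a minimal sketch of the kind of Postgres → Snowflake batch load described above. It assumes the psycopg2, pandas, and snowflake-connector-python packages; the connection details, query, and table names (orders, RAW.ORDERS) are hypothetical placeholders, not details from this role.

import pandas as pd
import psycopg2
import snowflake.connector
from snowflake.connector.pandas_tools import write_pandas

# Extract recent rows from the transactional Postgres database (hypothetical DSN and table).
pg_conn = psycopg2.connect("host=pg.internal dbname=app user=etl password=secret")
df = pd.read_sql(
    "SELECT * FROM orders WHERE updated_at >= NOW() - INTERVAL '1 day'",
    pg_conn,
)

# Land the batch in a Snowflake raw schema; dbt models handle downstream transforms and tests.
sf_conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="secret",
    warehouse="LOAD_WH", database="ANALYTICS", schema="RAW",
)
write_pandas(sf_conn, df, table_name="ORDERS", auto_create_table=True)

pg_conn.close()
sf_conn.close()

In production this step would typically be replaced or complemented by Snowpipe, Streams/Tasks, or a managed connector such as Fivetran, with dbt owning the modeling layer.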
Requirements
• Experience: 8+ years in Data Engineering, including 3+ years in Snowflake & dbt.
• Database Expertise:
• Deep hands-on experience with dbt (Core/Cloud) macros, testing, documentation, and packages.
• Strong expertise in Postgres (schema design, optimization, stored procedures, large-scale workloads).
• Advanced knowledge of Snowflake (data modeling, performance tuning, governance).
• Programming: Proficient in SQL and Python, including API integrations and automation.
• Orchestration & ETL: Hands-on with Airflow, Dagster, Prefect (or similar), and ETL/ELT tools like Fivetran and NiFi (a minimal Airflow sketch follows this list).
• Data Architecture: Strong understanding of data warehousing, dimensional modeling, medallion architecture, and system design principles.
• Cloud: Experience with AWS (mandatory); GCP or Azure is a plus.
• DevOps: Experience with Git/GitOps CI/CD pipelines for data workflows.
• Leadership: Proven ability to mentor teams, collaborate cross-functionally, and deliver impact in fast-paced environments.
• Communication: Excellent written and verbal communication skills.
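To illustrate the orchestration expectation above, here is a minimal Airflow sketch that chains an extract-load step with dbt run and dbt test; the DAG id, schedule, and file paths are hypothetical placeholders rather than details from this role.

from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="postgres_to_snowflake_elt",  # hypothetical name
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Load raw data (e.g. the Postgres -> Snowflake script sketched earlier, or a Fivetran sync).
    extract_load = BashOperator(
        task_id="extract_load",
        bash_command="python /opt/pipelines/pg_to_snowflake.py",  # hypothetical path
    )

    # Build dbt models, then run dbt tests so data-quality failures stop the pipeline early.
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="cd /opt/dbt/analytics && dbt run --target prod",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="cd /opt/dbt/analytics && dbt test --target prod",
    )

    extract_load >> dbt_run >> dbt_test

Dagster or Prefect would express the same ordering with their own primitives (assets/ops or flows/tasks).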