

SnapCode Inc
Senior Data Engineer (Snowflake & Observability Implementation)
⭐ - Featured Role | Apply directly with Data Freelance Hub
This is a 6-month Senior Data Engineer role focused on Snowflake and Data Observability, based in LATAM, with a pay rate of "XX". Key skills include Snowflake DMFs, dbt, Splunk integration, and incident management with OpsGenie.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
240
-
🗓️ - Date
January 29, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
California, United States
-
🧠 - Skills detailed
#Logging #Storage #Workiva #Observability #SQL (Structured Query Language) #BI (Business Intelligence) #Splunk #Data Governance #Data Engineering #Data Quality #Documentation #Terraform #Data Pipeline #dbt (data build tool) #Snowflake #Anomaly Detection #Data Modeling #Monitoring #Metadata #Scala #Streamlit #JSON (JavaScript Object Notation) #Version Control
Role description
Client: Workiva
Job Title: Senior Data Engineer - Data Observability (Snowflake, dbt, DMF)
Location: LATAM
Duration: 6 months
Role Overview
We are looking for a highly skilled Senior Data Engineer with strong experience in Data Observability to help operationalize and scale a robust data reliability framework. This role will focus on implementing end-to-end observability across dbt, Snowflake Data Metric Functions (DMFs), Splunk, and OpsGenie, ensuring proactive detection and resolution of data quality issues across critical data assets.
The mission is to establish a self-healing, highly visible data reliability layer that eliminates silent data failures and enables faster incident response.
Key Responsibilities
1. Unified Metadata Collection & Persistence
• Standardize and automate dbt metadata capture across all model runs.
• Build and maintain a hardened dbt-to-Snowflake logging pipeline to persist run_results.json and manifest.json into an Observability schema (see the persistence sketch after this list).
• Implement automated cleanup and retention policies to manage storage efficiently.
2. Data Quality & Observability Framework
• Apply data observability rules at scale by pushing dbt checks into Snowflake DMFs.
• Implement and manage observability rules across key dimensions:
• Validity (data types, formats)
• Freshness (timeliness, latency)
• Volume (record count reconciliation)
• Schema & Values (structural and value changes)
• Distribution (anomaly detection, trend deviations)
• Ensure data quality monitoring for high-priority and Tier-1 tables.
3. DMF Thresholding & Performance Optimization
• Design and implement targeted Snowflake DMFs instead of blanket monitoring.
• Define dynamic thresholds (e.g., standard deviation-based) to reduce alert fatigue (a threshold sketch follows the Operational Standards section below).
• Analyze and optimize DMF credit consumption, keeping monitoring costs within 5-10% of total compute.
4. Observability & Alerting (Splunk Integration)
• Build a single pane of glass for data observability using Splunk.
• Create high-performance alerts correlating dbt job failures with DMF violations.
• Ensure alerts include contextual payloads such as: the failing dbt model or code link, the table owner, and the downstream BI impact.
5. Incident Management & SLA Enforcement (OpsGenie)
• Integrate Splunk alerts with OpsGenie for actionable incident management (see the sketch after this item's bullets).
• Configure:
• Priority-based alert routing (Warning vs Critical)
• Auto-resolution of alerts when issues self-heal
• SLA tracking for MTTD (Mean Time to Detect) and MTTR (Mean Time to Resolve)
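As an illustration of responsibilities 4 and 5, below is a minimal Python sketch of the alert hand-off: it posts a priority-routed OpsGenie alert carrying the contextual payload named above, and auto-resolves by alias when a check self-heals. The OpsGenie Alert API endpoint and GenieKey auth are the platform's public interface; the function names, severity labels, and payload fields are illustrative assumptions, not part of the posting.

```python
"""Hypothetical Splunk-to-OpsGenie hand-off: the model/owner/URL values
would come from the Splunk alert's contextual payload."""
import os
import requests

OPSGENIE_URL = "https://api.opsgenie.com/v2/alerts"
HEADERS = {"Authorization": f"GenieKey {os.environ['OPSGENIE_API_KEY']}"}

def notify_opsgenie(model: str, owner: str, downstream: list[str],
                    severity: str, run_url: str) -> None:
    # Priority-based routing: Critical pages as P1, Warning files as P3.
    payload = {
        "message": f"[dbt] {model} failed data-quality checks",
        # A stable alias lets the close call below auto-resolve this alert.
        "alias": f"dq-{model}",
        "priority": "P1" if severity == "Critical" else "P3",
        "tags": ["data-observability", severity.lower()],
        # Contextual payload: code link, table owner, downstream BI impact.
        "details": {
            "dbt_model": model,
            "table_owner": owner,
            "downstream_bi_impact": ", ".join(downstream),
            "run_url": run_url,
        },
    }
    requests.post(OPSGENIE_URL, json=payload, headers=HEADERS,
                  timeout=10).raise_for_status()

def auto_resolve(model: str) -> None:
    # Close by alias once a healthy run is observed, so MTTR reflects
    # actual recovery rather than manual ticket hygiene.
    requests.post(f"{OPSGENIE_URL}/dq-{model}/close?identifierType=alias",
                  json={"note": "Check passed on latest run."},
                  headers=HEADERS, timeout=10).raise_for_status()
```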
6. Reporting & Executive Visibility
• Build a Data Reliability Executive Dashboard (Splunk or Snowflake/Streamlit; a Streamlit sketch appears at the end of this description) to provide:
• Overall Data Health Score
• Volume and Freshness trends
• Top offending models/tables
• Month-over-month MTTD and MTTR improvements
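For responsibility 1, here is a minimal sketch of the dbt-to-Snowflake logging step referenced above, assuming an orchestrator hook runs it after each dbt build. The run_results.json fields used (metadata, results, unique_id, status, execution_time) are standard dbt artifact fields; the DBT_RUN_RESULTS table, connection details, and 90-day retention window are placeholders.

```python
"""Persist dbt run_results.json into an Observability schema (sketch)."""
import json
import snowflake.connector  # pip install snowflake-connector-python

def persist_run_results(path: str = "target/run_results.json") -> None:
    with open(path) as f:
        artifact = json.load(f)

    meta = artifact["metadata"]
    # One row per executed node: model/test id, status, runtime, message.
    rows = [
        (meta["invocation_id"], meta["generated_at"], r["unique_id"],
         r["status"], r.get("execution_time"), r.get("message"))
        for r in artifact["results"]
    ]

    conn = snowflake.connector.connect(
        account="...", user="...", password="...",  # substitute your auth
        warehouse="OBS_WH", database="ANALYTICS", schema="OBSERVABILITY",
    )
    try:
        cur = conn.cursor()
        cur.execute("""
            CREATE TABLE IF NOT EXISTS DBT_RUN_RESULTS (
                invocation_id STRING, generated_at TIMESTAMP_TZ,
                unique_id STRING, status STRING,
                execution_time FLOAT, message STRING
            )""")
        cur.executemany(
            "INSERT INTO DBT_RUN_RESULTS VALUES (%s, %s, %s, %s, %s, %s)",
            rows)
        # Retention policy: cap storage by dropping rows older than 90 days.
        cur.execute("""
            DELETE FROM DBT_RUN_RESULTS
            WHERE generated_at < DATEADD(day, -90, CURRENT_TIMESTAMP())""")
    finally:
        conn.close()
```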
7. Operational Standards & Documentation
• Create detailed Runbooks / SOPs for on-call engineers.
• Implement Monitoring-as-Code using version control (Terraform, dbt project files).
• Maintain a weekly Observability Health Dashboard to identify noisy or unstable models.
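Monitoring-as-Code from the bullet above could look like the sketch below: a version-controlled mapping of targeted DMFs applied at deploy time, followed by the standard-deviation threshold query promised in responsibility 3. The DMF DDL (DATA_METRIC_SCHEDULE, ADD DATA METRIC FUNCTION) and the SNOWFLAKE.LOCAL.DATA_QUALITY_MONITORING_RESULTS view are standard Snowflake features; the config shape and the table and column names are assumptions to verify in your account.

```python
"""Declarative, version-controlled DMF registration (sketch)."""
import snowflake.connector
from snowflake.connector.errors import ProgrammingError

# Targeted monitoring: only Tier-1 tables, only the columns that matter.
TIER1_MONITORS = {
    "ANALYTICS.CORE.ORDERS": {
        "schedule": "60 MINUTE",
        "dmfs": [
            ("SNOWFLAKE.CORE.ROW_COUNT", ""),            # volume
            ("SNOWFLAKE.CORE.FRESHNESS", "UPDATED_AT"),  # freshness
            ("SNOWFLAKE.CORE.NULL_COUNT", "ORDER_ID"),   # validity
        ],
    },
}

def apply_monitors(conn: snowflake.connector.SnowflakeConnection) -> None:
    cur = conn.cursor()
    for table, cfg in TIER1_MONITORS.items():
        # The schedule must exist before metric functions can be attached.
        cur.execute(f"ALTER TABLE {table} "
                    f"SET DATA_METRIC_SCHEDULE = '{cfg['schedule']}'")
        for dmf, column in cfg["dmfs"]:
            try:
                cur.execute(f"ALTER TABLE {table} "
                            f"ADD DATA METRIC FUNCTION {dmf} ON ({column})")
            except ProgrammingError:
                pass  # already attached on a previous deploy

# Dynamic threshold: flag the latest ROW_COUNT reading that drifts beyond
# 3 standard deviations of its own 30-day history, instead of a fixed limit.
DYNAMIC_THRESHOLD_SQL = """
SELECT table_name, measurement_time, value
FROM SNOWFLAKE.LOCAL.DATA_QUALITY_MONITORING_RESULTS
WHERE metric_name = 'ROW_COUNT'
  AND measurement_time >= DATEADD(day, -30, CURRENT_TIMESTAMP())
QUALIFY measurement_time = MAX(measurement_time) OVER (PARTITION BY table_name)
    AND ABS(value - AVG(value) OVER (PARTITION BY table_name))
        > 3 * COALESCE(STDDEV(value) OVER (PARTITION BY table_name), 0)
"""
```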
Required Skills & Experience
• Strong hands-on experience with Snowflake, especially Data Metric Functions (DMFs)
• Advanced experience with dbt (metadata, testing, orchestration)
• Proven experience integrating observability tools like Splunk
• Experience with incident management platforms (OpsGenie preferred)
• Strong SQL and data modeling skills
• Experience building scalable, production-grade data pipelines
• Familiarity with cost optimization and performance tuning in Snowflake
Nice to Have
• Experience with Streamlit dashboards
• Infrastructure-as-Code (Terraform)
• Background in data governance or data reliability engineering
• Experience supporting on-call or production data platforms
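To close, a minimal Streamlit sketch of the executive dashboard from responsibility 6, assuming the DBT_RUN_RESULTS table from the persistence sketch above; the health-score definition (share of passing node runs over 30 days) and connection details are illustrative choices, not stated requirements.

```python
"""Executive data-reliability dashboard (sketch). Run: streamlit run app.py"""
import pandas as pd
import snowflake.connector
import streamlit as st

@st.cache_data(ttl=600)
def load_runs() -> pd.DataFrame:
    conn = snowflake.connector.connect(
        account="...", user="...", password="...",  # substitute your auth
        warehouse="OBS_WH", database="ANALYTICS", schema="OBSERVABILITY",
    )
    try:
        df = pd.read_sql(
            "SELECT generated_at, unique_id, status FROM DBT_RUN_RESULTS "
            "WHERE generated_at >= DATEADD(day, -30, CURRENT_TIMESTAMP())",
            conn)
    finally:
        conn.close()
    df.columns = [c.lower() for c in df.columns]  # Snowflake upper-cases names
    return df

df = load_runs()
st.title("Data Reliability Executive Dashboard")

# Health score: share of passing node runs over the trailing 30 days.
health = (df["status"] == "success").mean() * 100
st.metric("Overall Data Health Score", f"{health:.1f}%")

# Top offending models: most failing runs in the window.
failures = (df[df["status"] != "success"]
            .groupby("unique_id").size()
            .sort_values(ascending=False).head(10))
st.subheader("Top offending models")
st.bar_chart(failures)
```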