

Torque Technologies LLC
Senior Data Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer with 12+ years of experience, focusing on Snowflake, Python, and ETL/ELT. It is a long-term remote position, requiring strong SQL, dbt Core, and OpenFlow skills, along with cloud environment experience.
Country: United States
Currency: $ USD
Day rate: Unknown
Date: April 17, 2026
Duration: Unknown
Location: Remote
Contract: Unknown
Security: Unknown
Location detailed: United States
Skills detailed:
#Observability #Data Processing #Fivetran #Datasets #Informatica #Data Lineage #Macros #ETL (Extract, Transform, Load) #Snowflake #Data Engineering #Logging #Strategy #Cloud #Matillion #Scala #Automation #Security #GCP (Google Cloud Platform) #ML (Machine Learning) #Data Science #Data Pipeline #BI (Business Intelligence) #SQL (Structured Query Language) #AWS (Amazon Web Services) #Data Access #dbt (data build tool) #Documentation #Batch #Data Quality #Deployment #GIT #Python #Monitoring #Clustering #Data Modeling #AI (Artificial Intelligence) #Azure #Airflow #Terraform #Automated Testing
Role description
Role: Sr. Snowflake Data Engineer.
Location: Remote.
Duration: Long Term.
Job Summary: We're hiring a Senior Snowflake Data Engineer to build and operate reliable, scalable data pipelines and curated data products on the Snowflake Data Cloud. Our platform uses a multi-account strategy, and our primary workloads support BI and ML/AI. This is a hands-on engineering role focused on Python-driven data engineering, robust ETL/ELT, and modern transformation practices using Streams, dbt Core, and OpenFlow.
You'll partner with analytics, data science, platform, and security teams to deliver production-grade datasets with strong quality, observability, governance alignment, and performance/cost efficiency.
Key Responsibilities:
Build and maintain batch and/or near-real-time ETL/ELT pipelines landing data into Snowflake (raw → curated → consumption layers).
Develop Python data engineering components (connectors, orchestration logic, framework utilities, testing tools, and automation) supporting BI and ML use cases.
Implement transformation frameworks in dbt Core: project structure standards, modular models, macros, tests, documentation, and environment-based deployments.
Use OpenFlow to build and operationalize ingestion/flow patterns, including configuration, scheduling, troubleshooting, and performance tuning.
Design data models optimized for consumption: curated marts for BI, and ML-ready datasets/features with repeatable refresh patterns.
Apply data quality and reliability practices: automated testing, schema drift handling, idempotent loads, backfills, and reconciliation checks (a minimal idempotent-load sketch follows this list).
Tune Snowflake performance and cost for pipelines: warehouse sizing, clustering/partitioning strategy where appropriate, incremental processing, and query optimization.
Enable cross-account patterns aligned to the multi-account strategy (promotion between environments, sharing curated datasets, deployment consistency).
Build operational excellence: pipeline observability, alerting, runbooks, incident response participation, and root-cause analysis.
Collaborate with platform/security teams to align pipelines with governance controls (RBAC, secure data access patterns) without blocking delivery.
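To make the idempotent-load and incremental-processing expectations above concrete, here is a minimal sketch assuming the snowflake-connector-python client; all table, column, and warehouse names (RAW.STG_ORDERS, CURATED.ORDERS, LOAD_WH) are invented for illustration and are not this team's actual codebase:

# Minimal sketch of an idempotent incremental load, assuming the
# snowflake-connector-python package. Table, column, and warehouse
# names (RAW.STG_ORDERS, CURATED.ORDERS, LOAD_WH) are hypothetical.
import os
import snowflake.connector

MERGE_SQL = """
MERGE INTO CURATED.ORDERS AS tgt
USING (
    SELECT ORDER_ID, STATUS, UPDATED_AT
    FROM RAW.STG_ORDERS
    WHERE UPDATED_AT > COALESCE(%(watermark)s, '1970-01-01'::TIMESTAMP_NTZ)
) AS src
ON tgt.ORDER_ID = src.ORDER_ID
WHEN MATCHED THEN UPDATE SET tgt.STATUS = src.STATUS, tgt.UPDATED_AT = src.UPDATED_AT
WHEN NOT MATCHED THEN INSERT (ORDER_ID, STATUS, UPDATED_AT)
    VALUES (src.ORDER_ID, src.STATUS, src.UPDATED_AT)
"""

def incremental_load(watermark=None):
    # MERGE keyed on ORDER_ID makes the load re-runnable: replaying the
    # same window updates rows in place instead of duplicating them.
    conn = snowflake.connector.connect(
        account=os.environ["SNOWFLAKE_ACCOUNT"],
        user=os.environ["SNOWFLAKE_USER"],
        password=os.environ["SNOWFLAKE_PASSWORD"],
        warehouse="LOAD_WH",
    )
    try:
        cur = conn.cursor()
        cur.execute(MERGE_SQL, {"watermark": watermark})
        return cur.rowcount  # rows inserted/updated this run
    finally:
        conn.close()

Because the MERGE is keyed, retries and backfills are safe to replay, which is what the backfill and reconciliation bullets above are getting at.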
Required Qualifications:
12+ years of data engineering experience, including significant delivery on Snowflake in production.
Strong Python skills (clean, testable code; packaging; logging/metrics; performance-aware data processing); a short illustrative sketch follows this list.
Strong SQL and data modeling fundamentals (dimensional and/or domain-oriented modeling).
Hands-on experience with dbt Core (models, macros, tests, docs, deployments, CI practices).
Hands-on experience with OpenFlow (building/running flows, operational support, troubleshooting).
Proven experience designing and operating ETL/ELT pipelines (incremental loads, CDC concepts, error handling, and backfills).
Experience working in cloud environments (AWS/Azure/GCP) and production operations (monitoring, on-call/incident response, SLAs).
Comfortable working across teams (analytics, ML, platform/security) and translating requirements into deliverable datasets.
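As a rough illustration of the "clean, testable code" bar, a sketch only: the record shape and field names ("order_id", "status") are invented for the example. The point is keeping transformations pure and separated from I/O so they unit-test without a Snowflake connection:

# Sketch: pure transformation plus batching, separated from I/O.
import logging
from typing import Iterable, Iterator

logger = logging.getLogger("pipeline")

def normalize(record: dict) -> dict:
    # Pure function: unit-testable with plain dicts, no warehouse needed.
    return {
        "order_id": int(record["order_id"]),
        "status": record.get("status", "unknown").strip().lower(),
    }

def batched(records: Iterable[dict], size: int = 10_000) -> Iterator[list]:
    # Fixed-size batches keep memory flat on large extracts
    # (the "performance-aware data processing" part of the bullet).
    batch = []
    for rec in records:
        batch.append(normalize(rec))
        if len(batch) >= size:
            logger.info("flushing batch of %d rows", len(batch))
            yield batch
            batch = []
    if batch:
        yield batch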
Nice to Have:
Experience supporting BI workloads (semantic-friendly marts, performance considerations, consumption patterns).
Experience supporting ML workflows (feature-ready datasets, reproducible training data, lineage and governance).
Familiarity with Snowflake governance features (masking/row access policies, secure views) and multi-account deployment patterns.
CI/CD and automation (Git workflows, build pipelines, infra-as-code such as Terraform).
Experience with common ingestion/orchestration tools (Airflow, Dagster, Prefect, etc.) or ELT tools (Fivetran/Matillion/Informatica); a minimal Airflow sketch follows.
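For orientation only, here is a minimal sketch of the kind of orchestration this bullet refers to, assuming Airflow 2.4+; the dag_id, schedule, and callable are all hypothetical:

# Hypothetical Airflow 2.4+ DAG; names and schedule are illustrative.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def run_incremental_load():
    # In a real project this would import and call the pipeline
    # package's entry point (e.g. the incremental_load sketch above).
    print("placeholder: invoke the load here")

with DAG(
    dag_id="snowflake_orders_daily",
    schedule="@daily",  # Airflow 2.4+ name for schedule_interval
    start_date=datetime(2026, 1, 1),
    catchup=False,
) as dag:
    PythonOperator(
        task_id="incremental_load",
        python_callable=run_incremental_load,
    )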