

Headway Tek Inc
Snowflake Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Snowflake Data Engineer in Dallas, TX, with a contract length of over 6 months and an unlisted pay rate. It requires 10+ years of data engineering experience and 5+ years in Snowflake; SnowPro Core Certification is preferred.
Country: United States
Currency: $ USD
Day rate: Unknown
Date: January 17, 2026
Duration: More than 6 months
Location: On-site
Contract: Unknown
Security: Unknown
Location detailed: Dallas, TX
Skills detailed: #API (Application Programming Interface) #Python #SQL (Structured Query Language) #Cloud #Matillion #XML (eXtensible Markup Language) #Azure ADLS (Azure Data Lake Storage) #AWS (Amazon Web Services) #Snowflake #Data Security #Deployment #Clustering #Snowpark #Kafka (Apache Kafka) #Collibra #Azure DevOps #Informatica #Alation #Scala #dbt (data build tool) #Data Marketplace #S3 (Amazon Simple Storage Service) #Data Catalog #Azure #Security #ETL (Extract, Transform, Load) #Data Governance #ADLS (Azure Data Lake Storage) #Tableau #Java #AWS S3 (Amazon Simple Storage Service) #JSON (JavaScript Object Notation) #Data Vault #Computer Science #SnowPipe #Storage #DevOps #Fivetran #BI (Business Intelligence) #Automation #Metadata #Batch #GCP (Google Cloud Platform) #Data Processing #GitHub #GIT #Data Engineering #Data Ingestion #Microsoft Power BI #Vault
Role description
Title: Snowflake Data Engineer – 4 Openings
Location: Dallas, TX (Day 1 onsite)
Full-time
Key Responsibilities
• Design and implement Snowflake schemas (star, snowflake, data vault) optimized with micro-partitioning, clustering keys, materialized views, and search optimization services.
• Build real-time and batch ingestion pipelines into Snowflake using Snowpipe, Kafka Connect, Fivetran, Matillion, Informatica, or dbt (Snowpipe sketched after this list).
• Automate incremental data processing with Streams & Tasks to support CDC (Change Data Capture) (sketched after this list).
• Use Zero-Copy Cloning for environment management, testing, and sandboxing (sketched after this list).
• Apply Time Travel and Fail-safe features for data recovery and auditing (Time Travel sketched after this list).
• Develop data transformation logic in Snowpark for Python/SQL/Scala to push compute directly into Snowflake.
• Design integrations with cloud storage (S3, Azure ADLS, GCS) for staging and external tables (stage sketch after this list).
• Implement data sharing and data marketplace solutions via Snowflake Secure Data Sharing and Snowflake Marketplace (sketched after this list).
• Enable semi-structured data handling (JSON, Avro, Parquet, ORC, XML) using VARIANT columns and lateral flattening (sketched after this list).
• Integrate Snowflake with BI tools (Power BI, Tableau) via live connections and semantic layers.
• Implement RBAC (Role-Based Access Control), Row Access Policies, and Dynamic Data Masking for data security (sketched after this list).
• Optimize compute usage with multi-cluster warehouses, resource monitors, and query performance tuning (sketched after this list).
• Manage cost optimization strategies (warehouse auto-suspend, query profiling, storage/compute separation).
• Integrate with data catalog & governance platforms (Collibra, Alation, Informatica CDGC) using Snowflake metadata and APIs.
• Work with domain teams to deliver data products leveraging Snowflake's data mesh-friendly features.
• Collaborate with architects to design a Snowflake-centric data fabric integrated with ETL/ELT and API layers.
• Support CI/CD automation for Snowflake code deployment using GitHub Actions, Azure DevOps, or dbt Cloud.
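
A minimal sketch of continuous ingestion from cloud storage via an external stage and Snowpipe. All object names here (raw_stage, s3_int, raw.events, the bucket path) are hypothetical placeholders, and the sketch assumes a storage integration and S3 event notifications are already configured:

    -- External stage over S3 (bucket and integration names are hypothetical)
    CREATE OR REPLACE STAGE raw_stage
      URL = 's3://example-bucket/events/'
      STORAGE_INTEGRATION = s3_int;

    -- Landing table for raw JSON payloads
    CREATE OR REPLACE TABLE raw.events (payload VARIANT);

    -- Snowpipe loads new files automatically via cloud event notifications
    CREATE OR REPLACE PIPE raw.events_pipe AUTO_INGEST = TRUE AS
      COPY INTO raw.events
      FROM @raw_stage
      FILE_FORMAT = (TYPE = 'JSON');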
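One common way to automate incremental CDC processing with Streams & Tasks, assuming hypothetical raw.orders (source) and analytics.orders (target) tables and an etl_wh warehouse:

    -- Stream captures row-level changes on the source table
    CREATE OR REPLACE STREAM orders_stream ON TABLE raw.orders;

    -- Task runs on a schedule, but only when the stream has new changes
    CREATE OR REPLACE TASK merge_orders
      WAREHOUSE = etl_wh
      SCHEDULE = '5 MINUTE'
      WHEN SYSTEM$STREAM_HAS_DATA('ORDERS_STREAM')
    AS
      MERGE INTO analytics.orders t
      USING (
        -- Drop the DELETE half of update pairs so each target row
        -- is matched by at most one stream row
        SELECT * FROM orders_stream
        WHERE NOT (METADATA$ACTION = 'DELETE' AND METADATA$ISUPDATE)
      ) s
      ON t.order_id = s.order_id
      WHEN MATCHED AND s.METADATA$ACTION = 'DELETE' THEN DELETE
      WHEN MATCHED THEN UPDATE SET t.status = s.status
      WHEN NOT MATCHED THEN INSERT (order_id, status)
        VALUES (s.order_id, s.status);

    -- Tasks are created suspended; resume to start the schedule
    ALTER TASK merge_orders RESUME;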
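Zero-Copy Cloning and Time Travel in sketch form; the database and table names are hypothetical, and the query ID is a placeholder left unfilled:

    -- Clone production into a dev sandbox; no data is physically copied
    CREATE DATABASE analytics_dev CLONE analytics;

    -- Time Travel: query a table as it was one hour ago
    SELECT * FROM analytics.orders AT (OFFSET => -3600);

    -- Clone a table as of a point in time for auditing or recovery
    CREATE TABLE analytics.orders_restored
      CLONE analytics.orders BEFORE (STATEMENT => '<query_id>');

    -- Recover a dropped table within the Time Travel retention window
    UNDROP TABLE analytics.orders;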
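A minimal Secure Data Sharing sketch; the share, database, and consumer account names are hypothetical:

    CREATE SHARE sales_share;
    GRANT USAGE ON DATABASE analytics TO SHARE sales_share;
    GRANT USAGE ON SCHEMA analytics.public TO SHARE sales_share;
    GRANT SELECT ON TABLE analytics.public.orders TO SHARE sales_share;

    -- Add a consumer account; it can then create a database from the share
    ALTER SHARE sales_share ADD ACCOUNTS = xy12345;  -- hypothetical account locator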
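Flattening semi-structured JSON out of a VARIANT column, continuing the hypothetical raw.events table from the ingestion sketch (payload shape is assumed):

    SELECT
      e.payload:order_id::STRING AS order_id,
      i.value:sku::STRING        AS sku,
      i.value:qty::NUMBER        AS qty
    FROM raw.events e,
      LATERAL FLATTEN(INPUT => e.payload:items) i;  -- one row per array element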
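Dynamic Data Masking and a Row Access Policy in sketch form; the role names, tables, and the role-to-region mapping table security.region_map are all assumptions for illustration:

    -- Mask email addresses for everyone except an approved role
    CREATE OR REPLACE MASKING POLICY mask_email AS (val STRING)
      RETURNS STRING ->
      CASE WHEN CURRENT_ROLE() IN ('PII_READER') THEN val
           ELSE '*** MASKED ***' END;

    ALTER TABLE analytics.customers
      MODIFY COLUMN email SET MASKING POLICY mask_email;

    -- Filter rows by region via a (hypothetical) role-to-region mapping table
    CREATE OR REPLACE ROW ACCESS POLICY region_filter AS (region STRING)
      RETURNS BOOLEAN ->
      CURRENT_ROLE() = 'SECURITY_ADMIN'
      OR EXISTS (SELECT 1 FROM security.region_map m
                 WHERE m.role_name = CURRENT_ROLE() AND m.region = region);

    ALTER TABLE analytics.orders ADD ROW ACCESS POLICY region_filter ON (region);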
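Cost and concurrency controls via a resource monitor and a multi-cluster warehouse; the quota, sizing, and names are illustrative values, not prescriptions:

    -- Cap monthly credit spend and suspend warehouses at the limit
    CREATE OR REPLACE RESOURCE MONITOR monthly_cap
      WITH CREDIT_QUOTA = 500
      FREQUENCY = MONTHLY
      START_TIMESTAMP = IMMEDIATELY
      TRIGGERS ON 80 PERCENT DO NOTIFY
               ON 100 PERCENT DO SUSPEND;

    -- Multi-cluster warehouse that scales out under concurrency and
    -- auto-suspends after 60 seconds of inactivity
    CREATE OR REPLACE WAREHOUSE bi_wh
      WAREHOUSE_SIZE = 'MEDIUM'
      MIN_CLUSTER_COUNT = 1
      MAX_CLUSTER_COUNT = 4
      SCALING_POLICY = 'STANDARD'
      AUTO_SUSPEND = 60
      AUTO_RESUME = TRUE
      RESOURCE_MONITOR = monthly_cap;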
Qualifications
Education: Bachelor's or Master's in Computer Science, Data Engineering, or a related field.
Experience:
• 10+ years of data engineering experience, with 5+ years in Snowflake Data Cloud.
• Expertise in SQL optimization and Snowflake performance tuning.
• Hands-on with Snowpipe, Streams & Tasks, Snowpark, Zero-Copy Cloning, and Secure Data Sharing.
• Proficiency in Python, Scala, or Java for Snowpark development.
• Experience integrating with cloud platforms such as AWS.
• Exposure to ETL/ELT tools (Informatica, Matillion, Fivetran).
• Familiarity with CI/CD, Git, and DevOps practices for data operations.
Preferred Certifications:
• SnowPro Core Certification
Key Skills
• Snowflake-native feature design and implementation (Snowpark, Streams, Time Travel, Secure Data Sharing)
• Data ingestion (Snowpipe, CDC, Kafka, Fivetran)
• Semi-structured data handling (VARIANT, JSON, Avro, Parquet)
• Advanced SQL and performance tuning
• Data governance (RBAC, masking, lineage, catalogs)
• Cloud data platform integrations (AWS S3, Azure ADLS, GCP GCS)
• BI and analytics tool integration
• Cost optimization and warehouse orchestration






