

Phaxis
Data Engineer (Snowflake)
⭐ - Featured Role | Apply direct with Data Freelance Hub
This is a contract role (duration unspecified) for a Data Engineer (Snowflake), paying $100 to $130 per hour. Key skills required include SQL, Snowflake, dbt, and data pipeline development.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
1040
-
🗓️ - Date
April 10, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Remote
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
New York, NY
-
🧠 - Skills detailed
#Data Integrity #Data Pipeline #ETL (Extract, Transform, Load) #Snowflake #SQL Queries #Data Engineering #dbt (data build tool) #Scala #Clustering #SQL (Structured Query Language)
Role description
Pay rate is $100 to $130 per hour
100% remote
• Develop, optimize, and maintain complex SQL queries to extract, transform, and analyze data efficiently
• Design and implement scalable data models using dimensional techniques such as star and snowflake schemas
• Build, monitor, and enhance ELT workflows, leveraging tools like dbt or comparable frameworks
• Tune Snowflake performance, including managing virtual warehouses, clustering, and partitioning strategies
• Collaborate with business stakeholders to translate requirements into robust, reliable data pipelines
• Ensure data integrity and quality through testing, validation, and governance best practices
• Troubleshoot and resolve issues in data pipelines, query performance, and architecture design
• Continuously identify opportunities to automate, streamline, and improve data engineering processes
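The testing-and-validation responsibility above can be sketched as lightweight row-level checks, loosely analogous to dbt's built-in `not_null` and `unique` tests. This is an illustrative Python sketch only; the `orders` data and column names are hypothetical, not part of the role description:

```python
# Minimal data-quality checks in the spirit of dbt's not_null / unique tests.
# Rows are plain dicts; in practice these checks would run against warehouse
# query results.

def check_not_null(rows, column):
    """Return indices of rows where `column` is NULL (None)."""
    return [i for i, row in enumerate(rows) if row.get(column) is None]

def check_unique(rows, column):
    """Return values of `column` that appear more than once."""
    seen, dupes = set(), set()
    for row in rows:
        value = row.get(column)
        if value in seen:
            dupes.add(value)
        seen.add(value)
    return sorted(dupes)

# Hypothetical sample data with one NULL and one duplicate key.
orders = [
    {"order_id": 1, "customer_id": 10},
    {"order_id": 2, "customer_id": None},
    {"order_id": 2, "customer_id": 11},
]

null_rows = check_not_null(orders, "customer_id")  # rows failing not_null
dup_ids = check_unique(orders, "order_id")         # values failing unique
print(null_rows, dup_ids)  # → [1] [2]
```

In a dbt project the same intent is usually declared in a model's YAML (`tests: [not_null, unique]`) rather than hand-coded, which is why the posting groups testing with governance best practices.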




