

Elios, Inc.
Lead Snowflake Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Lead Snowflake Data Engineer on a contract basis in New York, NY; the pay rate is not disclosed. It requires 10+ years of experience, deep Snowflake expertise, strong Python and PySpark skills, and proven end-to-end pipeline design.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
May 6, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
New York, United States
-
🧠 - Skills detailed
#AI (Artificial Intelligence) #Agile #Leadership #Compliance #Scala #React #Consulting #Snowflake #Data Lifecycle #Storage #Data Mart #Data Governance #Data Engineering #ETL (Extract, Transform, Load) #Data Pipeline #Datasets #Data Processing #SQL (Structured Query Language) #Data Modeling #Spark (Apache Spark) #Clustering #Python #AWS (Amazon Web Services) #PySpark #Security #Cloud
Role description
Lead Snowflake Data Engineer (Contract)
Data Engineering | Contract with potential for extension | New York, NY | Fully Onsite
About the Role
We are partnering with a global consulting firm to bring on a Lead Snowflake Data Engineer who can design, own, and deliver data platforms end to end. You will sit at the center of pipeline design, modeling, and optimization for a modern cloud environment built on Snowflake and Cortex AI. From ingestion to consumption, the build is yours.
This is a senior, hands-on technical leadership seat. You will architect, write code, tune performance, and stand in front of clients to defend the decisions you have made. The right person here loves the platform, has shipped real production pipelines, and can explain trade-offs in plain language.
What You Will Own
• Lead the design and development of end-to-end ELT pipelines on Snowflake.
• Architect scalable data models optimized for performance, cost, and analytics consumption.
• Build and maintain backend data services in Python and PySpark.
• Leverage Snowflake Cortex AI to enable advanced analytics and intelligent data products.
• Drive performance tuning across pipelines, including query optimization, clustering, and warehouse scaling.
• Enforce best practices in data governance, security, and compliance.
• Partner across business, analytics, and engineering teams to deliver high-quality solutions.
• Provide technical leadership and mentorship to engineering teams.
• Communicate architecture decisions and trade-offs clearly in client-facing environments.
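The Cortex AI bullet above refers to Snowflake's built-in LLM functions, which are invoked directly from SQL. As a hedged illustration of the kind of statement involved — the table and column names (`raw.reviews`, `review_text`) are placeholders, not anything from this posting:

```python
# Sketch: build a Snowflake SQL statement that applies a Cortex AI function
# (SNOWFLAKE.CORTEX.SENTIMENT) to score free-text rows. In production this
# string would be executed via a Snowflake connection; here we only build it.

def cortex_sentiment_sql(source_table: str, text_column: str) -> str:
    """Return SQL that adds a sentiment score to each row of source_table."""
    return (
        f"SELECT *,\n"
        f"       SNOWFLAKE.CORTEX.SENTIMENT({text_column}) AS sentiment_score\n"
        f"FROM {source_table}"
    )

sql = cortex_sentiment_sql("raw.reviews", "review_text")
print(sql)
```

This is a minimal sketch of one Cortex function; the role's "intelligent data products" would layer such calls into the broader ELT flow rather than run them ad hoc.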
What You Bring
Must-Haves
• 10+ years of experience, or equivalent depth of ownership over production-grade data platforms.
• Deep expertise in Snowflake (data modeling, performance tuning, optimization).
• Strong Python and PySpark skills.
• Advanced SQL.
• Proven ability to design and deliver end-to-end data pipelines (ingestion, transformation, modeling, consumption) in cloud environments (AWS preferred).
• Required: ownership of at least one production-grade Snowflake pipeline, end to end.
• Strong foundation in modern data warehousing: dimensional modeling (star/snowflake schemas), ELT/ETL design patterns, data marts, and optimization strategies.
• Experience with distributed data processing and large-scale datasets.
• Hands-on experience with Snowflake Cortex AI integration.
• Working knowledge of React.js or a similar framework.
• Strong understanding of data governance, security, and compliance.
• Ability to clearly explain and defend architectural decisions, design systems that perform reliably at scale, and balance performance, cost, and maintainability.
Technical Depth You Will Be Asked About
• Snowflake Performance & Scaling: warehouse scaling modes (auto-scale, multi-cluster) and when to use them, clustering keys and performance trade-offs, cost vs performance optimization.
• Snowflake Storage & Optimization: micro-partitioning and its impact on pruning and query performance, practical optimization techniques for large datasets.
• End-to-End Pipeline Design: designing a complete ELT pipeline on Snowflake, deciding where transformations should occur (Snowflake vs external processing), ensuring scalability, maintainability, and performance across the pipeline.
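The scaling and clustering topics above map to a small set of Snowflake DDL statements. As a sketch of what interviewers may probe — warehouse, table, and column names here are invented for illustration:

```python
# Sketch: generate Snowflake DDL for two of the levers named above —
# a multi-cluster warehouse in auto-scale mode, and a clustering key that
# improves micro-partition pruning on large tables. Names are placeholders.

def multicluster_warehouse_ddl(name: str, size: str = "MEDIUM",
                               min_clusters: int = 1,
                               max_clusters: int = 4) -> str:
    """MIN_CLUSTER_COUNT != MAX_CLUSTER_COUNT puts the warehouse in
    auto-scale mode: Snowflake adds clusters when queries queue and
    retires them as load drops, trading cost against concurrency."""
    return (
        f"CREATE WAREHOUSE {name}\n"
        f"  WAREHOUSE_SIZE = '{size}'\n"
        f"  MIN_CLUSTER_COUNT = {min_clusters}\n"
        f"  MAX_CLUSTER_COUNT = {max_clusters}\n"
        f"  SCALING_POLICY = 'STANDARD';"
    )

def clustering_key_ddl(table: str, columns: list[str]) -> str:
    """A clustering key co-locates rows with similar values in the same
    micro-partitions, so filters on these columns can prune partitions
    instead of scanning them — at the cost of background reclustering."""
    return f"ALTER TABLE {table} CLUSTER BY ({', '.join(columns)});"

print(multicluster_warehouse_ddl("ELT_WH"))
print(clustering_key_ddl("sales.fact_orders", ["order_date", "region"]))
```

The cost-versus-performance trade-off the posting mentions lives in exactly these parameters: a wider `MAX_CLUSTER_COUNT` absorbs concurrency spikes but bills more credits, and clustering keys only pay off on large, frequently range-filtered tables.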
About the Engagement
This is a vendor-facing role. You will need strong communication and stakeholder alignment skills, the ability to operate independently within complex enterprise environments, and a clear focus on delivering production-grade, scalable data solutions.
About the Firm
Our partner is a global consulting firm that builds technology and data platforms for the world's largest financial services and enterprise clients. Senior engineers here are trusted to lead, not just execute.
Why This Role
• Real architectural ownership across the full data lifecycle.
• Modern stack with Cortex AI in the mix, not just legacy lift-and-shift.
• Senior, technical leadership seat with mentorship responsibility.
• Onsite collaboration in NYC with a sharp Agile team.
• Contract today, with a clear runway for extension.