Prairie Consulting Services

Lead Data Engineer (Snowflake)

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Lead Data Engineer (Snowflake) with a contract length of "unknown," offering a day rate of $680. Based hybrid in Downtown Chicago, it requires 5–8+ years of data engineering experience, strong Snowflake expertise, and hands-on proficiency in AWS and Azure.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
680
🗓️ - Date
April 21, 2026
🕒 - Duration
Unknown
🏝️ - Location
Hybrid
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
Chicago, IL
🧠 - Skills detailed
#Monitoring #Cloud #Observability #Data Pipeline #Terraform #Python #ADF (Azure Data Factory) #Programming #Data Modeling #Infrastructure as Code (IaC) #AWS S3 (Amazon Simple Storage Service) #ETL (Extract, Transform, Load) #Lambda (AWS Lambda) #SnowPipe #Data Warehouse #Tableau #DevOps #Data Engineering #Kubernetes #Snowflake #AWS (Amazon Web Services) #Storage #Docker #Batch #S3 (Amazon Simple Storage Service) #Azure #SQL (Structured Query Language) #Data Quality #BI (Business Intelligence) #Looker #Scala #Kafka (Apache Kafka) #Microsoft Power BI
Role description
Senior / Lead Snowflake Data Engineer (Greenfield Build, Streaming, Multi-Cloud)

Location: Downtown Chicago, hybrid, 3 days/week onsite

Overview
We are seeking a Senior Snowflake Data Engineer with proven greenfield experience: someone who has designed and built data platforms from scratch, not just enhanced existing systems. You will play a critical role in foundational architecture, building scalable data pipelines, and establishing best practices across a modern Snowflake-based ecosystem spanning AWS and Azure.

What You’ll Do
• Architect and implement a new Snowflake data warehouse environment from scratch
• Build end-to-end ingestion frameworks (see sketch 1 below) supporting:
   • Batch/file-based ingestion (S3, Blob Storage)
   • Real-time streaming (Snowpipe, Kafka, Event Hubs)
• Develop scalable transformations using Snowflake SQL, stored procedures, and Python (see sketch 2 below)
• Define and implement data models (dimensional/star schema) aligned with business needs
• Establish data quality, monitoring, and observability frameworks (see sketch 3 below)
• Optimize Snowflake performance and cost efficiency
• Integrate with AWS (S3, Lambda, CloudWatch) and Azure (ADF, Event Hubs, Functions)
• Implement CI/CD pipelines and Infrastructure as Code (Terraform/CloudFormation)
• Collaborate with stakeholders to translate ambiguous requirements into scalable solutions
• Mentor team members and help define data engineering standards

Required Expertise
• 5–8+ years in Data Engineering, including 3+ years of strong Snowflake experience
• Proven track record of greenfield data platform implementations
• Deep expertise in:
   • Advanced SQL and Snowflake optimization
   • Batch and streaming data pipelines
   • Data modeling and warehouse design
• Hands-on experience with both AWS and Azure ecosystems
• Strong Python programming skills
• Experience delivering production-grade, scalable data systems

Preferred (What Sets You Apart)
• Snowflake certifications (SnowPro Core / Advanced)
• Experience building real-time streaming architectures using Kafka or similar technologies
• Strong experience with Terraform or other Infrastructure as Code tooling
• Familiarity with Docker, Kubernetes, and modern DevOps practices
• Experience with data observability/quality tools (Monte Carlo, Great Expectations, etc.)
• Exposure to BI tools (Tableau, Power BI, Looker)
• Prior experience in high-scale enterprise data environments

What We’re Looking For
• Engineers who own problems end-to-end, not just write code
• Strong systems thinking and an architectural mindset
• Ability to work across multi-cloud ecosystems
• Someone who can optimize, scale, and modernize data platforms, not just maintain them
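
Sketch 1: ingestion. A minimal Python sketch of the auto-ingest pattern named above (Snowpipe over an S3 landing bucket), assuming the snowflake-connector-python package. Every object name here (bucket, storage integration, stage, pipe, table) is a hypothetical placeholder, not something this posting specifies.

```python
"""Sketch 1: file-based ingestion into Snowflake via an auto-ingest pipe.

Assumes snowflake-connector-python; all object names are hypothetical.
"""
import snowflake.connector

DDL = [
    # External stage over the S3 landing bucket
    # (bucket and storage-integration names are assumptions).
    """
    CREATE STAGE IF NOT EXISTS RAW.EVENTS_STAGE
      URL = 's3://example-landing-bucket/events/'
      STORAGE_INTEGRATION = S3_EVENTS_INT
    """,
    # Auto-ingest pipe: S3 event notifications trigger COPY INTO for new files.
    """
    CREATE PIPE IF NOT EXISTS RAW.EVENTS_PIPE
      AUTO_INGEST = TRUE
      AS COPY INTO RAW.EVENTS
         FROM @RAW.EVENTS_STAGE
         FILE_FORMAT = (TYPE = 'JSON')
    """,
]

def main() -> None:
    # Placeholder credentials; a real deployment would use a secrets manager.
    conn = snowflake.connector.connect(
        account="myaccount",
        user="etl_user",
        password="***",
        warehouse="LOAD_WH",
        database="ANALYTICS",
        schema="RAW",
    )
    cur = conn.cursor()
    try:
        for stmt in DDL:
            cur.execute(stmt)
    finally:
        cur.close()
        conn.close()

if __name__ == "__main__":
    main()
```

With AUTO_INGEST enabled, Snowflake consumes the bucket's event notifications, so new files land in the raw table without a scheduler; the Azure Blob Storage side of the role would use the analogous Event Grid notification setup.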
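Sketch 2: transformation. A minimal Snowpark aggregation, assuming the snowflake-snowpark-python package; table and column names are invented for illustration, and real models would follow the dimensional/star-schema design called for above.

```python
"""Sketch 2: a Python transformation in Snowflake via Snowpark.

Assumes snowflake-snowpark-python; table/column names are hypothetical.
"""
from snowflake.snowpark import Session
from snowflake.snowpark.functions import col, sum as sum_

# Placeholder connection details.
connection_parameters = {
    "account": "myaccount",
    "user": "etl_user",
    "password": "***",
    "warehouse": "XFORM_WH",
    "database": "ANALYTICS",
    "schema": "RAW",
}

session = Session.builder.configs(connection_parameters).create()

# Roll raw purchase events up into a daily fact table (star-schema style).
daily_fact = (
    session.table("RAW.EVENTS")
    .filter(col("EVENT_TYPE") == "purchase")
    .group_by(col("EVENT_DATE"), col("CUSTOMER_ID"))
    .agg(sum_(col("AMOUNT")).alias("TOTAL_AMOUNT"))
)

# Overwrite keeps the sketch idempotent across reruns.
daily_fact.write.mode("overwrite").save_as_table("MARTS.FCT_DAILY_PURCHASES")

session.close()
```

Snowpark pushes this plan down to Snowflake as SQL, so the same logic could equally be written as a SQL stored procedure; the choice is largely about testing and tooling preferences.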
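Sketch 3: data quality. A minimal hand-rolled freshness/validity check over the hypothetical table from sketch 2; in practice the observability tools listed above (Monte Carlo, Great Expectations) would carry this load, so this only shows the shape of the idea.

```python
"""Sketch 3: minimal data quality checks against Snowflake.

Assumes snowflake-connector-python; table/column names are hypothetical.
"""
import snowflake.connector

CHECKS = {
    # Completeness: the fact table should not be empty after a load.
    "fact_table_not_empty": (
        "SELECT COUNT(*) FROM MARTS.FCT_DAILY_PURCHASES",
        lambda n: n > 0,
    ),
    # Validity: purchase totals should never be negative.
    "no_negative_amounts": (
        "SELECT COUNT(*) FROM MARTS.FCT_DAILY_PURCHASES WHERE TOTAL_AMOUNT < 0",
        lambda n: n == 0,
    ),
}

def run_checks(conn) -> dict[str, bool]:
    """Run each check query and return pass/fail per check name."""
    results = {}
    cur = conn.cursor()
    try:
        for name, (query, passes) in CHECKS.items():
            value = cur.execute(query).fetchone()[0]
            results[name] = passes(value)
    finally:
        cur.close()
    return results

if __name__ == "__main__":
    conn = snowflake.connector.connect(
        account="myaccount", user="etl_user", password="***",
        warehouse="LOAD_WH", database="ANALYTICS", schema="MARTS",
    )
    try:
        for name, ok in run_checks(conn).items():
            print(f"{name}: {'PASS' if ok else 'FAIL'}")
    finally:
        conn.close()
```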