

Tror - AI for everyone
Lead Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Lead Data Engineer in Chicago, IL, on a long-term contract at a competitive pay rate. Candidates must have 10+ years of experience, strong skills in Snowflake, SQL, Python/PySpark, and real-time Data Vault implementation.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
May 9, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Chicago, IL
-
🧠 - Skills detailed
#Data Vault #PySpark #Spark (Apache Spark) #SQL (Structured Query Language) #ADLS (Azure Data Lake Storage) #Snowflake #Triggers #SnowPipe #Vault #Data Engineering #dbt (data build tool) #Data Pipeline #Python #Azure #Scala #Automation #Kafka (Apache Kafka) #Databricks #Data Modeling #AI (Artificial Intelligence) #ADF (Azure Data Factory)
Role description
Role: Lead Data Engineer
Location: Chicago, IL (Locals Only)
Experience: 10+ Years
Job Type: Contract/W2
Duration: Long Term
Interview: Video + Final In-Person Interview
What We Do NOT Want:
• Candidates without real-time Data Vault implementation experience
• Profiles with only support/maintenance work and no hands-on development
• Candidates without strong Snowflake, SQL, or Python/PySpark experience
• Candidates without Kafka/Event Hub streaming experience
• Candidates lacking Azure data platform experience (ADF, ADLS, Databricks)
• Candidates without query optimization/performance tuning experience
• Candidates with only theoretical knowledge of dbt, Snowpipe, and Tasks/Triggers
• Non-local candidates, or candidates unwilling to attend the in-person final interview in Chicago
• Primarily managerial/coordination profiles with little hands-on coding
• Candidates without CI/CD automation exposure
• Candidates with a weak understanding of Snowflake architecture concepts (Virtual Warehouse vs. Serverless Compute)
Required Skills:
Strong experience with:
• Snowflake / Snowpipe
• SQL
• Python / PySpark
Hands-on experience in:
• Data Engineering
• Data Modeling techniques
• Data Vault implementation (real project experience required)
Experience with:
• Kafka streaming
• Azure Event Hub
• Streaming data pipelines
Strong Azure stack knowledge:
• ADLS
• ADF
• Databricks
• CI/CD automation
Knowledge of:
• Snowflake Triggers & Tasks
• Virtual Warehouse vs. Serverless Compute
• Query Optimization
• dbt and its advantages over other tools
Responsibilities:
• Lead development of Azure-based data platforms
• Build and optimize scalable data pipelines
• Work on real-time/streaming data solutions
• Improve performance and query efficiency
We look forward to receiving submissions from qualified local candidates only.
Thanks & Regards
Nikhila Gujjarlamudi
ngujjarlamudi@tror.ai
LinkedIn: linkedin.com/in/nikhilagujjarlamudi
Website: https://tror.ai
Address: 401 Ronan Way, Spring Hill, TN 37174