

Rockwoods Inc
Lead Data Engineer – AI Systems (Snowflake / dbt / LLM)
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Lead Data Engineer – AI Systems in Dallas, TX (Hybrid) on a contract basis, requiring 7+ years of experience, expertise in Python, Snowflake, SQL, and dbt, plus AI/LLM workflow experience. US Citizens only.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
May 12, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Dallas, TX
-
🧠 - Skills detailed
#Data Modeling #Datasets #ML (Machine Learning) #Snowflake #Airflow #Cloud #Observability #Data Engineering #Databases #AI (Artificial Intelligence) #Data Framework #Deployment #Data Architecture #Data Quality #ETL (Extract, Transform, Load) #dbt (data build tool) #API (Application Programming Interface) #SQL (Structured Query Language) #Python
Role description
Title: Lead Data Engineer – AI Systems (Snowflake / dbt / LLM)
Location: Dallas, TX (Hybrid)
Contract
US Citizens Only
About the Role
Rockwoods is hiring a Lead Data Engineer for a high-visibility engagement with an insurance client.
We are looking for someone who has genuinely worked on modern cloud data platforms and supported AI/LLM-driven initiatives in production environments.
This is not a traditional ETL or reporting role.
We need an engineer who understands how scalable data systems power AI applications — including LLM integrations, semantic search, vector-based retrieval, AI-ready data modeling, and production-grade pipelines.
You should be someone who:
• enjoys solving messy real-world data problems
• can build and optimize systems hands-on
• understands performance, scale, and reliability
• has worked beyond proofs of concept and actually deployed solutions
This is a strong opportunity for senior engineers who want ownership, technical influence, and meaningful architecture work.
Responsibilities
• Build and optimize scalable Python + Snowflake + dbt pipelines supporting analytics and AI use cases
• Design modern data architectures for LLM workflows, RAG patterns, semantic search, and AI-enabled applications
• Develop API and event-driven ingestion frameworks for structured and unstructured data
• Improve platform reliability, observability, data quality, and performance
• Prepare high-quality datasets for AI/ML inference and downstream applications
• Tune Snowflake performance and optimize transformation efficiency/costs
• Partner closely with engineering and business teams to solve operational data challenges
• Help establish scalable engineering standards and modern data platform best practices
Required Experience
• 7+ years of hands-on Data Engineering experience
• Strong expertise in Python, Snowflake, SQL, and dbt
• Experience building production-grade pipelines and modern cloud data platforms
• Experience supporting AI/LLM-related workflows in real environments
• Hands-on experience with OpenAI, Anthropic, embeddings, vector search, semantic retrieval, or RAG-style architectures
• Strong orchestration experience with Airflow or similar tools
• Experience handling imperfect enterprise-scale data
• Strong understanding of data modeling, optimization, transformation strategies, and scalability
• Ability to work independently in a fast-moving engineering environment
Strong Plus
• Insurance domain experience (Claims, Policy, Billing, Underwriting, etc.)
• Experience with vector databases or AI search architectures
• Exposure to MLOps or AI deployment workflows
• Experience designing reusable enterprise data frameworks
What This Role Is NOT
This is NOT:
• a junior ETL developer role
• a reporting/dashboard-only role
• an AI “prompt engineering” role
• a heavily bureaucratic environment with layers of approvals
We are looking for builders and problem-solvers.
Why Engineers Like This Role
• Modern cloud + AI-focused tech stack
• High ownership and technical influence
• Direct impact on real business initiatives
• Strong engineering culture
• Fast interview process
• Less process, more execution
• Opportunity to shape architecture decisions early
Important
Please apply only if you have hands-on experience with modern Data Engineering AND practical AI/LLM-related implementations in production environments.
Candidates with only reporting/dashboard backgrounds or purely academic AI exposure will likely not be a fit.