

Insight Global
Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for an AI Data Engineer on a 6+ month contract, paying competitive rates. Required skills include 5+ years in data engineering, strong SQL and Python, cloud data platforms (Snowflake preferred), and experience in quick service restaurants.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
680
-
🗓️ - Date
April 14, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Miami, FL
-
🧠 - Skills detailed
#AI (Artificial Intelligence) #Data Lineage #SQL (Structured Query Language) #ETL (Extract, Transform, Load) #Data Engineering #Snowflake #Cloud #Airflow #Monitoring #Data Ingestion #Metadata #AWS Glue #Scala #AWS (Amazon Web Services) #Python #Batch
Role description
Job Title: AI Data Engineer
Duration: 6+ month contract
Interview Process: 2 rounds
Must Haves:
• 5+ years of experience in data engineering
• Strong SQL and Python
• Experience with cloud data platforms (Snowflake preferred) and modern data stack
• Experience with agentic development tools such as Codex, Cursor, or Claude Code
• Experience in quick service restaurants or similar retail industries (open to hospitality)
• Experience with Airflow and AWS Glue
Pluses:
• Experience integrating vendor systems (labor, finance, inventory)
• Experience with semantic layers and metadata tools
• Experience enabling data for AI/LLM use cases
Job Description:
Insight Global is seeking an AI Data Engineer for a top global Quick Service Restaurant (QSR) enterprise. This role sits at the intersection of data engineering and applied AI, focused on building reliable, scalable data foundations that power analytics, semantic consistency, and AI-driven decision support. The engineer will own ingestion and transformation pipelines for POS and back-office systems, ensuring high-quality, standardized data flows into the enterprise platform to enable trusted insights and intelligent AI use cases at global scale.
Day-to-Day:
• Design and build data ingestion pipelines for global POS systems
• Integrate data from back-office vendor platforms (labor, inventory, finance)
• Support multiple ingestion patterns (batch, streaming, APIs)
• Normalize and standardize data across vendors, regions, and brands
• Ensure data freshness, completeness, and consistency
• Build scalable ETL/ELT pipelines for real-time and batch workloads
• Optimize pipelines for performance, reliability, and cost
• Support semantic layer enablement by modeling standardized KPIs
• Capture metadata, lineage, and schema details to support AI systems
• Implement data validation, monitoring, and schema versioning
• Partner with analytics and AI teams to improve downstream insights





