Odiin.AI
AI Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This is a contract role for an AI Data Engineer; the listing does not specify contract length or pay rate. Key requirements include strong Python and SQL skills plus experience with cloud platforms; familiarity with big data frameworks and ML workflows is essential.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date
February 25, 2026
🕒 - Duration
Unknown
🏝️ - Location
Unknown
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
United States
🧠 - Skills detailed
#Datasets #Programming #ML (Machine Learning) #Data Storage #SQL (Structured Query Language) #Azure #AWS (Amazon Web Services) #Data Science #Cloud #Storage #Data Pipeline #Scala #ETL (Extract, Transform, Load) #Spark (Apache Spark) #Python #Data Engineering #AI (Artificial Intelligence) #GCP (Google Cloud Platform) #Hadoop #Data Framework #Big Data
Role description
You’ll work closely with AI engineers and data scientists to ensure data is accessible, clean, and ready for model training and inference.
Responsibilities:
• Build, manage, and optimise ETL pipelines for structured and unstructured data (a PySpark sketch follows this list).
• Collect, process, and clean large datasets for AI/ML model training.
• Collaborate with AI/ML teams to support data requirements for research and production.
• Implement data storage, retrieval, and processing best practices.
• Monitor, troubleshoot, and improve data pipeline performance and reliability.
• Document data workflows, architecture, and pipeline processes.
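As a concrete illustration of the ETL responsibility above, here is a minimal batch-pipeline sketch in PySpark. It is only a sketch under stated assumptions: the bucket paths, dataset names, and columns (order_id, user_id, amount, event_date) are hypothetical stand-ins, not any specific employer system.

```python
# Minimal ETL sketch with PySpark. All paths, dataset names, and columns
# are hypothetical; this shows the shape of a batch pipeline only.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Extract: a structured CSV plus semi-structured JSON event logs.
orders = spark.read.option("header", True).csv("s3://example-bucket/raw/orders.csv")
events = spark.read.json("s3://example-bucket/raw/events/*.json")

# Transform: drop incomplete records, normalise types, join the sources.
orders_clean = (
    orders
    .dropna(subset=["order_id", "user_id"])            # discard malformed rows
    .withColumn("amount", F.col("amount").cast("double"))
)
enriched = orders_clean.join(events, on="user_id", how="left")

# Load: write partitioned Parquet for downstream training jobs
# (assumes the events source carries an event_date column).
enriched.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/curated/orders_enriched/"
)

spark.stop()
```

Partitioning the Parquet output by date is a common choice here, since downstream training jobs can then read only the date ranges they need.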
Requirements:
• Strong programming skills (Python, SQL, and/or Scala).
• Experience with cloud platforms (AWS, GCP, Azure) and data storage solutions.
• Knowledge of big data frameworks (Spark, Hadoop, or similar).
• Familiarity with ML workflows and AI data requirements (a data-preparation sketch follows this list).
• Strong problem-solving and communication skills.
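To ground the ML-workflow requirement, the sketch below shows the hand-off step from data engineering to model training: cleaning a raw table and splitting it into training and validation sets with pandas and scikit-learn. The file names, column names (label, value), and clipping thresholds are assumptions for illustration.

```python
# Hypothetical data-preparation step for AI/ML model training.
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv("raw_measurements.csv")  # hypothetical input file

# Basic cleaning: deduplicate, drop rows missing the label, and clip
# outliers to the 1st/99th percentiles so extremes do not skew training.
df = df.drop_duplicates()
df = df.dropna(subset=["label"])
df["value"] = df["value"].clip(
    lower=df["value"].quantile(0.01),
    upper=df["value"].quantile(0.99),
)

# Split into training and validation sets for the AI/ML team,
# with a fixed seed so the split is reproducible.
train_df, val_df = train_test_split(df, test_size=0.2, random_state=42)
train_df.to_parquet("train.parquet")
val_df.to_parquet("val.parquet")
```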