NAVA Software Solutions

AI Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for an AI Data Engineer with 6+ years of experience, specializing in Databricks and MuleSoft, to design scalable data pipelines remotely. Proficiency in Python/Scala and Azure cloud environments is required, along with AI/ML pipeline experience. Contract length and pay rate are unspecified.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
April 10, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Remote
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
United States
-
🧠 - Skills detailed
#Apache Spark #Cloud #Azure #Data Ingestion #Spark (Apache Spark) #Data Pipeline #ETL (Extract, Transform, Load) #Data Engineering #Scala #Security #Data Quality #Data Governance #Azure Cloud #Databricks #AI (Artificial Intelligence) #ML (Machine Learning) #Datasets #Delta Lake #Python
Role description
Job title: AI Data Engineer (Databricks / MuleSoft)
Location: Remote (US)
Job Summary
We are looking for a highly skilled AI Data Engineer with expertise in Databricks and MuleSoft to design and implement scalable data pipelines and AI-driven solutions using Claude Code (Opus 4.6) within Azure AI Foundry.
Key Responsibilities
• Design and build data pipelines using Databricks (Spark, Delta Lake)
• Integrate systems and APIs using MuleSoft
• Work on data ingestion, transformation, and orchestration
• Enable AI/ML workflows by preparing high-quality datasets
• Collaborate with engineering teams leveraging AI-assisted development tools
• Ensure data governance, security, and performance optimization
Required Skills
• 6+ years of experience in Data Engineering
• Strong expertise in Databricks, Apache Spark, and Delta Lake
• Experience with the MuleSoft integration platform
• Proficiency in Python/Scala
• Experience working in Azure cloud environments
Preferred Qualifications
• Experience with AI/ML pipelines and LLM integration
• Familiarity with the Azure AI Foundry / OpenAI ecosystem
• Knowledge of data governance and data quality frameworks