HireTalent - Diversity Staffing & Recruiting Firm
Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer with a focus on Azure and AI/ML; the contract length and pay rate are not specified. Key skills required include Azure Data Factory, Databricks, Python, SQL, and experience in AI/ML pipeline development.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
May 14, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Los Angeles, CA
-
🧠 - Skills detailed
#GitHub #AI (Artificial Intelligence) #Azure Data Factory #Data Pipeline #Spark (Apache Spark) #Azure #Data Governance #PySpark #Compliance #Terraform #Python #SQL (Structured Query Language) #ML (Machine Learning) #Apache Spark #MLflow #Scala #ADF (Azure Data Factory) #Data Architecture #Databricks #Delta Lake #Data Engineering
Role description
Senior Data Engineer | Azure & AI/ML Focus
Seeking an experienced Senior Data Engineer to architect next-generation data solutions supporting analytics and AI/ML initiatives.
What You'll Do:
• Design scalable data pipelines using Azure Data Factory & Databricks
• Build Lakehouse solutions with Spark/PySpark technologies
• Implement APIs and enterprise integration platforms
• Support Generative AI use cases with MLflow & Delta Lake
• Lead data governance and compliance initiatives
• Automate infrastructure with Terraform & GitHub CI/CD
Must-Have Requirements:
• Expert-level Azure Data Factory & Databricks experience
• Strong Python, SQL, and Apache Spark skills
• Hands-on Terraform and GitHub experience
• Enterprise data architecture background
• AI/ML pipeline development experience
