CloudIngest

AI/ML Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for an AI/ML Data Engineer in Berkeley Heights, NJ, on a 6-12 month contract at $60/hr. It requires 12+ years of experience and expertise in Power BI, SQL, Python, and cloud platforms, with a focus on data engineering and AI/ML workflows.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
480
🗓️ - Date
April 25, 2026
🕒 - Duration
More than 6 months
🏝️ - Location
On-site
📄 - Contract
W2 Contractor
🔒 - Security
Unknown
📍 - Location detailed
Berkeley Heights, NJ
🧠 - Skills detailed
#DAX #Scala #ML (Machine Learning) #Batch #Data Engineering #Python #AWS (Amazon Web Services) #AI (Artificial Intelligence) #GCP (Google Cloud Platform) #Microsoft Power BI #Data Processing #Azure #SQL (Structured Query Language) #Data Modeling #Kafka (Apache Kafka) #Cloud #Data Pipeline #Databricks #Spark (Apache Spark) #BI (Business Intelligence) #Project Management
Role description
Job Title: AI/ML Data Engineer
Location: Berkeley Heights, NJ (On-Site)
Duration: 6-12 month contract with multiple extensions
Rate: $60/hr on W2 (No C2C)
Minimum Experience: 12+ Years
Role Overview
We are seeking an AI/ML Data Engineer with strong expertise in Power BI and dashboard development to support internal project management and product data initiatives. The ideal candidate will combine data engineering skills with business intelligence expertise, enabling actionable insights through scalable data pipelines and intuitive dashboards.
Key Responsibilities
• Design, build, and optimize data pipelines to support AI/ML models and analytics.
• Develop and maintain Power BI dashboards for project management and product data reporting.
• Collaborate with product managers, project leads, and business stakeholders to gather requirements and translate them into data solutions.
Required Skills
• Data engineering experience with exposure to AI/ML workflows.
• Strong hands-on expertise with Power BI (data modeling, DAX, dashboard design).
• Experience with Databricks optimizations, Delta tables, and serverless architectures.
• Proficiency in SQL, Python, and Spark for data processing (see the sketch below).
• Knowledge of cloud platforms (AWS, Azure, GCP) and hybrid environments.
• Familiarity with Kafka, streaming ingestion, and batch pipelines.
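For illustration only, here is a minimal sketch of the kind of pipeline the bullets above describe: a PySpark batch job that cleans raw records and writes a Delta table that a Power BI report could consume. All paths, table names, and columns are hypothetical (not taken from the posting), and the example assumes a Databricks-style cluster where the delta-spark package is available.

```python
from pyspark.sql import SparkSession, functions as F

# Hypothetical sketch: batch-clean raw project events and publish them as a
# Delta table for downstream Power BI reporting. Paths and column names are
# illustrative only.
spark = SparkSession.builder.appName("project-data-pipeline").getOrCreate()

raw = spark.read.json("s3://example-bucket/raw/project_events/")

cleaned = (
    raw.filter(F.col("project_id").isNotNull())          # drop malformed rows
       .withColumn("event_date", F.to_date("event_ts"))  # derive a partition key
       .dropDuplicates(["project_id", "event_ts"])       # keep re-runs idempotent
)

# Writing in Delta format assumes delta-spark is on the cluster
# (Databricks ships it by default).
(cleaned.write
    .format("delta")
    .mode("overwrite")
    .partitionBy("event_date")
    .save("s3://example-bucket/curated/project_events/"))
```

In practice, a curated table like this would be registered in the metastore and surfaced to Power BI through Databricks SQL or a similar connector.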