CloudIngest

AIML Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for an AIML Data Engineer in Berkeley Heights, New Jersey; contract length and pay rate are unspecified. Key skills include Power BI, SQL, Python, and experience with Databricks and cloud platforms. On-site work is required.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
April 25, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Berkeley Heights, NJ
-
🧠 - Skills detailed
#DAX #Scala #ML (Machine Learning) #Batch #Data Engineering #Python #AWS (Amazon Web Services) #AI (Artificial Intelligence) #GCP (Google Cloud Platform) #Microsoft Power BI #Data Processing #Azure #SQL (Structured Query Language) #Data Modeling #Kafka (Apache Kafka) #Cloud #Data Pipeline #Databricks #Spark (Apache Spark) #BI (Business Intelligence) #Project Management
Role description
Hi,

Send your resume to Dilip@cloudingest.com

Title: AIML Data Engineer
Location: Berkeley Heights, New Jersey
Client: Direct

Role Overview
We are seeking an AI/ML Data Engineer with strong expertise in Power BI and dashboard development to support internal project management and product data initiatives. The ideal candidate will combine data engineering skills with business intelligence expertise, enabling actionable insights through scalable data pipelines and intuitive dashboards.

Key Responsibilities
• Design, build, and optimize data pipelines to support AI/ML models and analytics.
• Develop and maintain Power BI dashboards for project management and product data reporting.
• Collaborate with product managers, project leads, and business stakeholders to gather requirements and translate them into data solutions.

Required Skills
• Data engineering experience with exposure to AI/ML workflows.
• Strong hands-on expertise with Power BI (data modeling, DAX, dashboard design).
• Experience with Databricks optimizations, Delta tables, and serverless architectures.
• Proficiency in SQL, Python, and Spark for data processing.
• Knowledge of cloud platforms (AWS, Azure, GCP) and hybrid environments.
• Familiarity with Kafka, streaming ingestion, and batch pipelines.

Thanks,
Dilip Kumar