IPolarity

Data Engineer with AI/ML

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer with AI/ML expertise, requiring 5+ years in data engineering and 2+ years in production ML. The contract is full-time W2 and calls for strong skills in SQL, Kafka, Databricks, and data governance.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
March 26, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Tampa, FL
-
🧠 - Skills detailed
#BI (Business Intelligence) #MongoDB #Security #SQL (Structured Query Language) #Data Engineering #Data Pipeline #Compliance #Computer Science #Monitoring #NiFi (Apache NiFi) #Data Accuracy #Apache NiFi #Deployment #Statistics #Data Architecture #Data Processing #Kafka (Apache Kafka) #Data Integration #AI (Artificial Intelligence) #Databricks #ML (Machine Learning) #Data Security #OpenSearch #Data Governance
Role description
Title: Data Engineer with AI/ML
Work Authorization: GC/USC
Job Type: Full-Time (Strictly W2 Only - No C2C allowed)
One of our clients is seeking a Data Engineer with strong AI/ML expertise to modernize and scale its Business Intelligence (BI) capabilities. This role will design and build data pipelines, deploy machine learning solutions, and operationalize intelligent analytics to drive decision-making across the organization. The ideal candidate blends data engineering best practices with applied machine learning, MLOps, and AI.
Key Responsibilities
Project Responsibility: End-to-end data pipelines and integrations
Technical Competencies:
- Advanced SQL optimization and complex query design
- Kafka streaming applications and connector development
- Databricks workflow development with medallion architecture
- Data governance implementation and compliance
- Performance tuning for large-scale data processing
- Data security and privacy best practices
- Apache NiFi pipeline development for invoice and PO processing
- Integration with purpose-built data stores (Druid, MongoDB, OpenSearch, Postgres)
- Build and maintain end-to-end ML pipelines for training, deployment, and monitoring of models
- Design and optimize data architectures for large-scale ML workloads
- Explore and implement LLM-based solutions, RAG architectures, and generative AI for business use cases
Soft Skills:
- Cross-functional collaboration with product and engineering teams
- Technical mentoring for junior data engineers
- Analytical thinking for complex data problems
- Stakeholder communication for data requirements
- Process improvement and efficiency focus
- Quality mindset for data accuracy and reliability
Vendor Management:
- Direct communication with data platform vendors
- Evaluating vendor tools for specific data use cases
- Providing technical feedback on vendor product roadmaps
- Coordinating with vendors on data integration projects
Qualifications:
- Bachelor's/Master's in Computer Science, Data Engineering, Statistics, or related field
- 5+ years in data engineering; 2+ years applying ML in production
E-Mail: sandhya.m@ipolarityllc.com
LinkedIn: linkedin.com/in/sandhya-s-895330288