Stefanini North America and APAC

Machine Learning Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Machine Learning Engineer in Dearborn, MI, with a contract length of "unknown." The pay rate is "unknown." Key skills include GCP, Big Data, AI/ML, Python, and SQL. Experience with Generative AI and LLMs is required.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
April 15, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Dearborn, MI
-
🧠 - Skills detailed
#Data Warehouse #GIT #Data Processing #Observability #FastAPI #Kubernetes #Snowflake #Dataflow #Reinforcement Learning #Model Deployment #GCP (Google Cloud Platform) #Microservices #Object Detection #Statistics #Data Engineering #API (Application Programming Interface) #Apache Spark #Scala #Storage #Airflow #Classification #Data Management #Databases #Cloud #Big Data #Deep Learning #REST (Representational State Transfer) #REST API #Spark (Apache Spark) #AI (Artificial Intelligence) #Deployment #Programming #Redshift #Consulting #Agile #Version Control #Logistic Regression #Python #NLP (Natural Language Processing) #Data Manipulation #ETL (Extract, Transform, Load) #ML (Machine Learning) #Regression #SQL (Structured Query Language) #IAM (Identity and Access Management) #Docker #BigQuery #Data Pipeline
Role description
Details:

Job Description

Stefanini Group is hiring! Stefanini is looking for a Machine Learning Engineer in Dearborn, MI (onsite). For quick apply, please reach out to Saurabh Kapoor at 248-582-6559 / saurabh.kapoor@stefanini.com.

You will be responsible for designing, building, deploying, and scaling complex self-running ML solutions - including Generative AI and Large Language Model (LLM) systems - in areas such as computer vision, perception, localization, natural language processing, and conversational AI. You will automate and optimize the end-to-end ML and Gen AI model lifecycle using expertise in experimental methodologies, statistics, prompt engineering, and coding for tool building and analysis. You will design and develop innovative ML models, Gen AI systems, and software algorithms - including LLM-based architectures (e.g., transformer models, RAG pipelines, fine-tuned foundation models) - to solve complex business problems in both structured and unstructured environments.

Responsibilities
• Design, build, maintain, and optimize scalable ML and Gen AI pipelines, architecture, and infrastructure, including vector databases, embedding stores, and LLM serving layers
• Use machine learning and statistical modeling techniques such as decision trees, logistic regression, Bayesian analysis, and deep learning methods, alongside prompt engineering, retrieval-augmented generation (RAG), and parameter-efficient fine-tuning (PEFT/LoRA), to develop and evaluate algorithms that improve product/system performance, quality, data management, and accuracy
• Adapt machine learning and Gen AI capabilities to domains such as virtual reality, augmented reality, object detection, tracking, classification, terrain mapping, intelligent document processing, and AI-powered agent workflows
• Train, fine-tune, and re-train ML models and LLMs as required, including supervised fine-tuning (SFT), reinforcement learning from human feedback (RLHF), and instruction tuning
• Deploy ML models, LLMs, and AI agents into production; run simulations and evaluations (including LLM evals and red teaming) for algorithm development and testing of various scenarios
• Automate model deployment, training, re-training, and Gen AI pipeline orchestration, leveraging principles of agile methodology, CI/CD/CT, MLOps, and LLMOps - including guardrail integration, prompt versioning, and observability tooling
• Enable model management for model versioning, traceability, and governance - including responsible AI practices, bias evaluation, hallucination mitigation, and content safety controls - to ensure modularity and consistency across environments for both ML and Gen AI systems

Job Requirements

Experience Required
• GCP - Experience deploying and managing services on Google Cloud Platform, including Compute Engine, Cloud Storage, IAM, and Cloud Functions. For example, designing and implementing a cloud-native application architecture using GKE (Google Kubernetes Engine) with Cloud SQL and Pub/Sub.
• Big Data - Experience working with large-scale data processing frameworks such as Apache Spark, Dataflow, or BigQuery. For example, building ETL pipelines that process terabytes of daily event data and transform it for downstream analytics.
• Data Warehousing - Experience designing and maintaining data warehouse solutions (e.g., BigQuery, Snowflake, Redshift). For example, modeling a star schema for a retail analytics platform that supports reporting on sales, inventory, and customer behavior.
• Artificial Intelligence & Expert Systems - Experience developing or integrating AI/ML models and rule-based expert systems. For example, building a classification model using Vertex AI to predict customer churn, or implementing a rule engine that automates underwriting decisions.
• API - Experience designing, building, and consuming RESTful or gRPC APIs. For example, developing a versioned REST API with OAuth 2.0 authentication that serves as the integration layer between a mobile application and backend microservices.

Experience Preferred
• Strong understanding of Generative AI principles and architectures, including Large Language Models (LLMs) and Retrieval-Augmented Generation (RAG) systems
• Proven experience building and deploying RAG systems, including the use of vector databases
• Proficiency in Python programming
• Solid experience with SQL for data manipulation and querying
• Hands-on experience with Google Cloud Platform (GCP) services relevant to AI/ML
• Basic understanding and practical experience with machine learning model fine-tuning
• Familiarity with data engineering concepts and practices
• Expertise in prompt engineering techniques for interacting with LLMs
• Experience with the OpenAI SDK
• Experience developing robust APIs, preferably with FastAPI
• Proficiency with version control systems (e.g., Git)
• Experience with containerization technologies (e.g., Docker)
• Google Cloud Platform - Familiarity with advanced GCP services beyond core compute and storage, such as Vertex AI, Dataflow, Cloud Composer (Airflow), and BigQuery ML. For example, using Cloud Composer to orchestrate scheduled data pipelines that feed into a BigQuery data warehouse.

Listed salary ranges may vary based on experience, qualifications, and local market. Some positions may also include bonuses or other incentives.

Stefanini takes pride in hiring top talent and developing relationships with our future employees. Our talent acquisition teams will never make an offer of employment without having a phone conversation with you. Those conversations will include a description of the job for which you have applied, as well as the process, including interviews and job offers.
About Stefanini Group

The Stefanini Group is a global provider of offshore, onshore, and nearshore outsourcing, IT digital consulting, systems integration, application, and strategic staffing services to Fortune 1000 enterprises around the world. With a presence across the Americas, Europe, Africa, and Asia, Stefanini serves more than four hundred clients across a broad spectrum of markets, including financial services, manufacturing, telecommunications, chemical services, technology, public sector, and utilities. Stefanini is a CMM Level 5 IT consulting company with a global presence.