

Stefanini North America and APAC
Machine Learning Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Machine Learning Engineer in Dearborn, MI, with a contract length of unspecified duration and a pay rate of "unknown." Key requirements include a Master's degree or equivalent experience, 4 years in data engineering, and proficiency in Google Cloud Platform, Python, and TensorFlow.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
May 12, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Dearborn, MI
-
🧠 - Skills detailed
#Microsoft Azure #Database Management #GitHub #Java #Migration #Storage #Monitoring #Telematics #REST API #Data Modeling #ML (Machine Learning) #Data Governance #Batch #Data Processing #GIT #Synapse #Jira #Terraform #Data Pipeline #Apache Kafka #Microservices #BigQuery #Redshift #Data Science #SonarQube #Security #Airflow #Amazon Redshift #Agile #Docker #Cloud #Database Design #Data Lineage #GCP (Google Cloud Platform) #PostgreSQL #Spark (Apache Spark) #REST (Representational State Transfer) #Data Engineering #Azure #Azure Synapse Analytics #MySQL #AI (Artificial Intelligence) #Deployment #DevOps #Data Architecture #Kafka (Apache Kafka) #TensorFlow #Scala #Computer Science #Consulting #SQL Server #Data Mapping #RDBMS (Relational Database Management System) #Continuous Deployment #Data Mining #Data Warehouse #Project Management #SQL (Structured Query Language) #Python #Automation
Role description
Details:
Job Description
Stefanini Group is hiring!
Stefanini is looking for a Machine Learning Engineer (Dearborn, MI)
For quick apply, please reach out to Adil Khan at 248-728-6424 / adil.khan@stefanini.com
We are seeking a Machine Learning Engineer who can build scalable and robust ML data pipelines in the cloud to process large volumes of connected vehicle data in support of agentic initiatives.
Responsibilities
• Optimize existing ML solutions for performance, security, and cost-effectiveness.
• Develop exceptional analytical data products using both streaming and batch ingestion patterns on Google Cloud Platform with solid data warehouse principles.
• Build data pipelines to monitor data quality and the performance of analytical models and agentic solutions.
• Maintain the data platform's infrastructure using Terraform, and continuously develop, evaluate, and deliver code using CI/CD.
• Collaborate with data analytics stakeholders to streamline the data acquisition, processing, and presentation process.
• Implement an enterprise data governance model and actively promote data protection, sharing, reuse, quality, and standards.
• Enhance and maintain the DevOps capabilities of the data platform.
• Continuously optimize and enhance existing data solutions (pipelines, products, infrastructure) for best performance, high security, low vulnerability, low costs, and high reliability.
• Work in an agile product team to deliver code frequently using Test Driven Development (TDD), continuous integration and continuous deployment (CI/CD).
• Promptly address code quality issues using SonarQube, Checkmarx, Fossa, and Cycode throughout the development lifecycle.
• Perform any necessary data mapping, data lineage activities and document information flows.
• Monitor the production pipelines and provide production support by addressing production issues as per SLAs.
• Provide analysis of connected vehicle data to support new product developments and production vehicle improvements.
• Continuously enhance your domain knowledge of connected vehicle data, connected services and algorithms/models/solutions developed by data scientists and AI engineers.
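As an illustration of the data-quality monitoring responsibility above, here is a minimal sketch in plain Python of the kind of record-level check such a pipeline might apply to connected-vehicle telemetry. All field names and thresholds are hypothetical, not taken from this role; in practice the check would run inside the streaming or batch framework the team actually uses.

```python
from dataclasses import dataclass

# Hypothetical quality rules for a connected-vehicle telemetry record.
REQUIRED_FIELDS = {"vin", "timestamp", "speed_kph"}

@dataclass
class QualityReport:
    passed: int = 0
    failed: int = 0

def check_record(record: dict) -> bool:
    """Return True if the record satisfies the (illustrative) quality rules."""
    if not REQUIRED_FIELDS.issubset(record):
        return False
    # Range check: accept only plausible road speeds.
    return 0 <= record["speed_kph"] <= 300

def monitor(records) -> QualityReport:
    """Fold a batch (or micro-batch from a stream) into a quality report."""
    report = QualityReport()
    for rec in records:
        if check_record(rec):
            report.passed += 1
        else:
            report.failed += 1
    return report

sample = [
    {"vin": "1FA000", "timestamp": 1715500000, "speed_kph": 88.5},
    {"vin": "1FA001", "timestamp": 1715500001},                   # missing speed
    {"vin": "1FA002", "timestamp": 1715500002, "speed_kph": -4},  # out of range
]
print(monitor(sample))  # QualityReport(passed=1, failed=2)
```

The same fold works for either ingestion pattern the posting mentions: a batch job passes the whole dataset, while a streaming consumer passes each micro-batch and emits the report to a monitoring sink.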
Job Requirements
Details:
Skills Required
• Technical Communication, Communications, Google Cloud Platform, TensorFlow, Data Governance, Machine Learning, Python, Artificial Intelligence & Expert Systems, GitHub, Tekton, Docker, Jira, Microservices, Data Architecture, Agile Software Development, SQL, Java, Spark, Cloud Architecture, Apache Kafka, REST APIs
Skills Preferred
• Telematics, Machine Learning, Data Modeling, Cloud Infrastructure, Data Mining, Database Design, Troubleshooting (Problem Solving), Labor Supervision
Experience Required
• Master's degree or foreign equivalent degree in Computer Science, Software Engineering, Information Systems, Data Engineering, or a related field, and 4 years of experience OR equivalent combination of education and experience (6+ years with Bachelor's Degree).
• 4 years of professional experience in: Data engineering, data product development and software product launches
• At least three of the following languages: Java, Python, Spark, Scala, SQL
• 3 years of cloud data/software engineering experience building scalable, reliable, and cost-effective production batch and streaming data pipelines using: Data warehouses like Amazon Redshift, Microsoft Azure Synapse Analytics, Google BigQuery.
• Workflow orchestration tools like Airflow.
• Relational Database Management System like MySQL, PostgreSQL, and SQL Server.
• Real-Time data streaming platform like Apache Kafka, GCP Pub/Sub
• Microservices architecture to deliver large-scale real-time data processing applications.
• REST APIs for compute, storage, operations, and security.
• DevOps tools such as Tekton, GitHub Actions, Git, GitHub, Terraform, Docker.
• Project management tools like Atlassian JIRA.
Experience Preferred
• Ph.D. or foreign equivalent degree in Computer Science, Software Engineering, Information System, Data Engineering, or a related field.
• 2 years of experience with ML Model Development and/or MLOps.
• Committed code to improve open-source data/software engineering projects
• Experience architecting cloud infrastructure and handling application migrations/upgrades.
• GCP Professional Certifications.
• Demonstrated passion to mine raw data and realize its hidden value.
• Passion to experiment/implement state of the art data engineering methods/techniques.
• Experience working in an implementation team from concept to operations, providing deep technical subject matter expertise for successful deployment.
• Experience implementing methods for automation of all parts of the pipeline to minimize labor in development and production.
• Analytics skills to profile data, troubleshoot data pipeline/product issues.
• Ability to simplify, clearly communicate complex data/software ideas/problems and work with cross-functional teams and all levels of management independently.
• Ability to mentor and advise junior team members
Education Required
• Bachelor's Degree
Education Preferred
• Master's Degree
• Listed salary ranges may vary based on experience, qualifications, and local market; some positions may include bonuses or other incentives.
• Stefanini takes pride in hiring top talent and developing relationships with our future employees. Our talent acquisition teams will never make an offer of employment without having a phone conversation with you. Those face-to-face conversations will involve a description of the job for which you have applied. We will also speak with you about the process, including interviews and job offers.
About Stefanini Group
The Stefanini Group is a global provider of offshore, onshore, and nearshore outsourcing, IT digital consulting, systems integration, application, and strategic staffing services to Fortune 1000 enterprises around the world. We operate across the Americas, Europe, Africa, and Asia, serving more than four hundred clients in a broad spectrum of markets, including financial services, manufacturing, telecommunications, chemical services, technology, public sector, and utilities. Stefanini is a CMM Level 5 IT consulting company with a global presence.






