

ML Engineer--W2 Only
Featured Role | Apply directly with Data Freelance Hub
This role is for a Machine Learning Engineer in Dearborn, MI, with a contract length of "unknown" and a pay rate of "unknown." Requires 3+ years of ML deployment experience, expertise in GCP and Python, and onsite work 4 days a week.
Country: United States
Currency: $ USD
Day rate: Unknown
Date discovered: August 14, 2025
Project duration: Unknown
Location type: On-site
Contract type: W2 Contractor
Security clearance: Unknown
Location detailed: Dearborn, MI
Skills detailed: #Data Quality #Agile #Airflow #Terraform #Kubernetes #Monitoring #Scala #AWS (Amazon Web Services) #Deployment #Automation #Security #TensorFlow #Project Management #Jira #Data Engineering #Data Pipeline #Documentation #Google Cloud Storage #Python #Spark (Apache Spark) #Object Detection #Classification #Computer Science #GCP (Google Cloud Platform) #PySpark #Continuous Deployment #BigQuery #Docker #Model Deployment #DevSecOps #ML (Machine Learning) #Storage #SonarQube #Cloud #Data Governance #Version Control #PyTorch #Statistics #Programming #Libraries #Datasets #GitHub #Azure #Data Science #API (Application Programming Interface)
Role description
W2 only
Zoom interviews
No H1B
Local candidates only (need a copy of the driver's license and the last 4 digits of the SSN)
Onsite
Machine Learning Engineer
Location: Dearborn, MI; 4 days a week onsite. The client is looking for local Michigan candidates, so include a copy of the driver's license.
Job Description
• Employees in this job function are responsible for designing, building, deploying, and scaling complex self-running ML solutions in areas such as computer vision, perception, and localization.
• They also automate and optimize the end-to-end ML model lifecycle using their expertise in experimental methodologies, statistics, and coding for tool building and analysis.
Key Responsibilities
• Collaborate with business and technology stakeholders to understand current and future ML requirements
• Design and develop innovative ML models and software algorithms to solve complex business problems in both structured and unstructured environments
• Design, build, maintain, and optimize scalable ML pipelines, architecture, and infrastructure
• Adapt machine learning to areas such as virtual reality, augmented reality, object detection, tracking, classification, and terrain mapping
• Deploy ML models and algorithms into production, and run simulations for algorithm development and testing of various scenarios
• Automate model deployment, training, and re-training, leveraging principles of agile methodology, CI/CD/CT (Continuous Integration / Continuous Deployment / Continuous Training), and MLOps
• Enable model management for model versioning and traceability to ensure modularity and symmetry across environments and models for ML systems
Skills Required
• GCP Cloud Run
• Kubernetes
• Spark
• SonarQube
• GCP (Google Cloud Platform)
• Tekton
• Python
• API
• Jira
• AWS
Skills Preferred
• We are seeking an experienced Machine Learning Engineer to design, implement, and maintain robust analytics pipeline solutions.
• These solutions will support the analysis, modeling, and prediction of upstream and downstream auction prices, directly benefiting the Business and Sales Planning Analytics (BSPA) Used Vehicle Analytics team and its customers.
• The ideal candidate will excel at developing ML/software engineering solutions, performing DevSecOps, and collaborating with cross-functional teams (including ML engineers, data scientists, and data engineers) to improve processes and drive business performance.
Responsibilities
• Develop, build, and maintain the infrastructure required for machine learning, including data pipelines, model deployment platforms, and model monitoring.
• Develop and maintain tools and libraries to support the development and deployment of machine learning models.
• Automate machine learning workflows using DevSecOps principles and practices.
• Collaborate with development and operations teams to implement software solutions that improve system integration and automation of ML pipelines.
• Design, develop, and manage data flows and APIs between upstream systems and applications.
• Troubleshoot and resolve issues related to system communication, data flow, and data quality.
• Collaborate with technical and non-technical teams to gather integration requirements and ensure successful deployment of data solutions.
• Create and maintain comprehensive technical documentation of software components.
• Work with IT to ensure systems meet evolving business needs and comply with data governance policies and security requirements.
• Implement and enforce the highest standards of data quality and integrity across all data processes.
• Manage deliverables through project management tools.
Experience Required
• Bachelor's degree in Computer Science, Information Systems, or a related field.
• 3+ years of experience developing and deploying machine learning models in a production environment.
• 3+ years of experience programming in Python.
• 2+ years of hands-on experience with Google Cloud Platform (GCP) services, including BigQuery and Google Cloud Storage for efficiently managing and processing large datasets, as well as Cloud Composer and/or Cloud Run.
• Experience with version control systems such as GitHub for managing code repositories and collaboration.
• 2+ years of experience with code quality and security scanning tools such as SonarQube, Cycode, and FOSSA.
• 3+ years of experience with data engineering tools and technologies such as Kubernetes, Container-as-a-Service (CaaS) platforms, OpenShift, Dataproc, Spark (with PySpark), or Airflow.
• Experience with CI/CD practices and tools, including Tekton or Terraform, as well as containerization technologies such as Docker or Kubernetes.
• Excellent problem-solving and analytical skills, with a focus on data-driven solutions.
• Familiarity with cloud computing platforms such as AWS, Azure, or Google Cloud Platform.
• Familiarity with Atlassian project management tools (e.g., Jira, Confluence) and agile practices.
Experience Preferred
• 5+ years of experience in the automotive industry, particularly in auto remarketing and sales.
• Master's degree in a relevant field (e.g., Computer Science, Data Science, Engineering).
• Proven ability to thrive in dynamic environments, managing multiple priorities and delivering high-impact results even with limited information.
• Exceptional problem-solving skills, a proactive and strategic mindset, and a passion for technical excellence and innovation in data engineering.
• Demonstrated commitment to continuous learning and professional development.
• Familiarity with machine learning libraries such as TensorFlow, PyTorch, or scikit-learn.
• Experience with MLOps tools and platforms.
Education Required
• Bachelor's Degree
Education Preferred
• Master's Degree