

EPITEC
Machine Learning Engineering Senior Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Machine Learning Engineer (W2 Contract) in Dearborn, MI, with a pay rate of $70+ per hour. Requires 4+ years of data engineering experience, strong GCP skills, and expertise in ML Ops, Python, and SQL.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
560
-
🗓️ - Date
May 13, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Dearborn, MI
-
🧠 - Skills detailed
#Databases #REST (Representational State Transfer) #Terraform #Apache Kafka #Migration #Python #Project Management #PostgreSQL #ML (Machine Learning) #Data Modeling #Security #Data Science #Data Pipeline #Documentation #Data Architecture #Docker #Database Design #Batch #Jira #REST API #Scala #Spark (Apache Spark) #Kafka (Apache Kafka) #Telematics #MySQL #TensorFlow #SQL Server #BigQuery #Agile #DevOps #GCP (Google Cloud Platform) #Data Governance #GitHub #Data Mapping #Cloud #Redshift #Data Mining #SonarQube #Synapse #Azure #Microservices #SQL (Structured Query Language) #ML Ops (Machine Learning Operations) #Data Quality #Scrum #Java #AI (Artificial Intelligence) #Airflow #Data Engineering
Role description
Job Title: Senior Machine Learning Engineer (W2 Contract, NO C2C)
Location: Dearborn, MI (must be local)
Job Type: Engineering
Expected hours per week: 40
Schedule: Onsite
Pay Range: $70+ an hour
Job Description
Position Description
We are seeking an ML Ops / Data Platform Engineer to build and maintain scalable machine learning and data platforms supporting connected vehicle and agentic AI initiatives. This role focuses on designing robust cloud-based data pipelines, optimizing ML solutions, and enabling secure, reliable, and cost-effective production systems on Google Cloud Platform (GCP).
You will work closely with data scientists, analytics stakeholders, and product teams to deliver high-quality data and ML solutions across streaming and batch pipelines while promoting best practices in data governance, DevOps, and software quality.
Key Responsibilities
ML Ops & Data Engineering
• Build scalable ML data pipelines in the cloud to process large volumes of connected vehicle data
• Optimize ML solutions for performance, security, reliability, and cost
• Support continual learning approaches to improve production model performance
• Develop analytical data products using streaming and batch ingestion patterns on GCP (a minimal pipeline sketch follows this list)
• Monitor data quality and ML model performance across pipelines and platforms
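To make the ingestion pattern above concrete, here is a minimal sketch of a streaming pipeline of the kind this role describes, assuming Apache Beam running on GCP (e.g., Dataflow); the project, subscription, table name, and event fields are hypothetical placeholders, not details from the posting.

```python
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def parse_event(message: bytes) -> dict:
    """Decode a JSON Pub/Sub message into a flat row for BigQuery."""
    event = json.loads(message.decode("utf-8"))
    return {
        "vehicle_id": event["vehicle_id"],
        "signal": event["signal"],
        "value": float(event["value"]),
        "event_ts": event["event_ts"],
    }


def run() -> None:
    # streaming=True switches Beam into unbounded (streaming) mode.
    options = PipelineOptions(streaming=True)
    with beam.Pipeline(options=options) as p:
        (
            p
            # Hypothetical subscription carrying connected-vehicle events.
            | "Read" >> beam.io.ReadFromPubSub(
                subscription="projects/my-project/subscriptions/vehicle-events"
            )
            | "Parse" >> beam.Map(parse_event)
            # Append rows into a (hypothetical) telematics table.
            | "Write" >> beam.io.WriteToBigQuery(
                "my-project:telematics.vehicle_signals",
                schema="vehicle_id:STRING,signal:STRING,value:FLOAT,event_ts:TIMESTAMP",
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            )
        )


if __name__ == "__main__":
    run()
```

The same shape covers batch ingestion: swapping ReadFromPubSub for a bounded source (e.g., a BigQuery or file read) and dropping streaming=True turns this into a batch job.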
Platform & Infrastructure
• Maintain and enhance data platform infrastructure using Terraform
• Design and maintain CI/CD pipelines for data and ML workloads
• Enhance DevOps capabilities across the data platform
• Monitor production pipelines and provide operational support according to SLAs
Architecture, Governance & Quality
• Implement and promote enterprise data governance models, including data protection, quality, lineage, and standards (a data-quality check sketch follows this list)
• Perform data mapping, lineage documentation, and information flow analysis
• Address code quality and security findings using tools such as SonarQube, Checkmarx, Fossa, and Cycode
• Continuously optimize existing pipelines, platforms, and infrastructure
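As one illustration of the data-quality monitoring referenced above, the sketch below runs a null-rate gate against a warehouse table using the google-cloud-bigquery client; the table, column, and threshold are hypothetical and would normally come from configuration.

```python
from google.cloud import bigquery

# Hypothetical table and threshold; illustrative of a scheduled quality gate.
TABLE = "my-project.telematics.vehicle_signals"
MAX_NULL_RATE = 0.01


def null_rate(client: bigquery.Client, column: str) -> float:
    """Fraction of NULLs in `column` over the last day of data."""
    query = f"""
        SELECT COUNTIF({column} IS NULL) / COUNT(*) AS null_rate
        FROM `{TABLE}`
        WHERE event_ts >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 DAY)
    """
    row = next(iter(client.query(query).result()))
    return row.null_rate


def main() -> None:
    client = bigquery.Client()
    rate = null_rate(client, "vehicle_id")
    if rate > MAX_NULL_RATE:
        # Failing loudly lets an orchestrator (e.g., Airflow) alert on the breach.
        raise RuntimeError(f"vehicle_id null rate {rate:.2%} exceeds {MAX_NULL_RATE:.2%}")


if __name__ == "__main__":
    main()
```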
Collaboration & Communication
• Collaborate with analytics, data science, and business stakeholders to streamline data acquisition and delivery
• Provide analysis of connected vehicle data to support new product development and vehicle improvements
• Communicate complex technical concepts clearly to both technical and non-technical audiences
• Work in an Agile product team using TDD, CI, and CD practices
Required Skills
• Strong technical communication and stakeholder collaboration skills
• Machine Learning and ML Ops experience
• Google Cloud Platform (GCP) – deep hands-on experience required
• Python, SQL, and Java
• Data engineering and data architecture
• Streaming and batch data pipelines
• Apache Kafka or GCP Pub/Sub (a consumer sketch follows this list)
• Spark
• REST APIs and microservices
• CI/CD, GitHub, Docker, Tekton
• Agile software development (Scrum)
• Data governance concepts and implementation
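For the streaming requirement above, here is a minimal consumer sketch using the kafka-python client; the topic, broker address, consumer group, and handler are hypothetical, and GCP Pub/Sub would be an equivalent substitution.

```python
import json

from kafka import KafkaConsumer  # kafka-python client


def handle(event: dict) -> None:
    """Placeholder handler; a real pipeline would validate and route the event."""
    print(event["vehicle_id"], event["signal"])


def main() -> None:
    consumer = KafkaConsumer(
        "vehicle-events",                    # hypothetical topic
        bootstrap_servers="localhost:9092",  # hypothetical broker
        group_id="telematics-ingest",
        value_deserializer=lambda v: json.loads(v.decode("utf-8")),
        auto_offset_reset="earliest",
    )
    for message in consumer:
        handle(message.value)


if __name__ == "__main__":
    main()
```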
Preferred Skills
• TensorFlow
• Telematics or connected vehicle data
• Data modeling, data mining, and database design
• Cloud infrastructure architecture
• Troubleshooting and problem solving
• Experience mentoring junior engineers
Experience Requirements
• PhD
• 4+ years in data engineering, data products, or software product development
• Experience with at least three of the following: Java, Python, Spark/Scala, SQL
• 3+ years building production batch and streaming pipelines using:
  • BigQuery, Redshift, or Azure Synapse
  • Airflow or similar orchestration tools (a minimal DAG sketch follows this list)
  • Relational databases (PostgreSQL, MySQL, SQL Server)
  • Kafka or Pub/Sub
  • Microservices architectures
  • Terraform, GitHub Actions, Docker
  • Jira or similar project management tools
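As a sketch of the orchestration experience listed above, here is a minimal Airflow 2.x DAG wiring a daily extract-and-load batch; the DAG id, schedule, and task bodies are hypothetical.

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract() -> None:
    """Placeholder: pull yesterday's raw events from source storage."""


def load() -> None:
    """Placeholder: load transformed rows into the warehouse."""


with DAG(
    dag_id="daily_vehicle_batch",  # hypothetical DAG name
    start_date=datetime(2026, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task
```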
Nice to Have
• ML model development or ML Ops experience
• GCP certifications
• Experience with cloud migrations or platform modernization
• Open-source contributions
• Automotive or connected services domain experience
• Passion for modern data engineering and ML platform design
Benefits: 80 hours of paid time off, medical insurance contributions, dental and vision coverage, and a 401(k) retirement savings plan