Machine Learning Engineer (28484)

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Machine Learning Engineer on a 6-month W2 contract in Brooklyn Park, MN, offering $60-$90/hour. Key skills include Apache Spark, PySpark, Python, Bash, and Docker. Experience in big data ecosystems and troubleshooting distributed systems is required.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
720
🗓️ - Date discovered
September 25, 2025
🕒 - Project duration
More than 6 months
🏝️ - Location type
Hybrid
📄 - Contract type
W2 Contractor
🔒 - Security clearance
Unknown
📍 - Location detailed
Brooklyn Park, MN
🧠 - Skills detailed
#Apache Spark #Terraform #Observability #R #YARN (Yet Another Resource Negotiator) #Monitoring #Consulting #Bash #REST API #GitHub #Hadoop #Code Reviews #Batch #Big Data #REST (Representational State Transfer) #API (Application Programming Interface) #ML (Machine Learning) #Spark (Apache Spark) #Scripting #Airflow #Docker #Grafana #PySpark #Forecasting #GIT #Python
Role description
Machine Learning Engineer (W2 Contract - not open for C2C/1099)
Brooklyn Park, MN | Hybrid
6-Month Contract | Start: October 2025
Pay Range: $60-$90/hour (W2) + benefits

About the Role
We're looking for an experienced Machine Learning Engineer to join our team on a 6-month contract, with potential extension. In this role, you'll support and enhance large-scale batch forecasting workflows running on Spark clusters, leveraging R, Python, Bash, and Terraform. You'll balance daily operational support with code improvements, ensuring high system reliability while driving continuous enhancements. If you thrive at the intersection of big data, machine learning, and distributed systems, this role gives you the opportunity to make an immediate impact.

What You'll Do
• Monitor & troubleshoot large-scale workflows on Spark/YARN to ensure smooth daily operations.
• Enhance existing jobs using Python (PySpark), Bash, R, and Terraform to improve functionality, stability, and performance (see the illustrative sketch at the end of this posting).
• Implement observability & monitoring with OTEL, Kibana, Grafana, and custom instrumentation.
• Collaborate via GitHub, contributing code through PRs, reviews, and branching strategies.
• Balance ops and dev by minimizing downtime while proactively building system improvements.

Must-Have Skills
• Strong experience with Apache Spark, PySpark, Hadoop, Hive, and big data ecosystems.
• Proficiency with Python and Bash, and working knowledge of R.
• Hands-on experience with Git/GitHub workflows (PRs, code reviews).
• Docker for containerized environments.
• Troubleshooting distributed systems at scale.

Nice-to-Have Skills
• Airflow (or equivalent orchestration tools).
• Terraform for infrastructure-as-code.
• Grafana/Kibana for monitoring.
• MLOps practices and OpenTelemetry (OTEL).
• Bash scripting & REST API integrations.

What We're Looking For
• A problem-solver with strong analytical and troubleshooting skills.
• Someone who communicates clearly and thrives in cross-functional collaboration.
• A proactive engineer who prioritizes operational reliability while driving innovation.

Dahl Consulting is proud to offer a comprehensive benefits package to eligible employees that will allow you to choose the best coverage to meet your family's needs. For details, please review the DAHL Benefits Summary: https://www.dahlconsulting.com/benefits-w2fta/
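The posting itself includes no code, but as a rough illustration of the "enhance existing jobs using Python (PySpark)" responsibility above, here is a minimal sketch of the kind of batch forecasting job the role describes. Every specific in it (the paths, column names, app name, and the moving-average stand-in for real model scoring) is a hypothetical assumption, not something stated in the posting.

```python
# Illustrative sketch only: a minimal PySpark batch job of the kind this role
# supports. Paths, column names, and the "forecast" logic are hypothetical.
import logging
import time

from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("daily_forecast_batch")


def main() -> None:
    spark = (
        SparkSession.builder
        # The app name shows up in the Spark/YARN UI, which helps when
        # monitoring and troubleshooting daily runs.
        .appName("daily-forecast-batch")
        .getOrCreate()
    )

    start = time.monotonic()
    log.info("Reading daily sales snapshot")
    sales = spark.read.parquet("/data/sales/daily_snapshot")  # hypothetical input path

    # Toy "forecast": a 7-day trailing average of units sold per store,
    # standing in for whatever model-scoring step the real workflow performs.
    window = Window.partitionBy("store_id").orderBy("ds").rowsBetween(-6, 0)
    forecast = sales.withColumn("forecast_units", F.avg("units_sold").over(window))

    forecast.write.mode("overwrite").parquet("/data/forecasts/daily")  # hypothetical output path

    # Simple custom instrumentation; in production this timing might be
    # exported via OTEL and charted in Grafana rather than just logged.
    log.info("Batch finished in %.1fs", time.monotonic() - start)
    spark.stop()


if __name__ == "__main__":
    main()
```

A job like this would typically be submitted to the cluster with spark-submit (e.g. `spark-submit --master yarn daily_forecast_batch.py`) and monitored through the YARN UI and logs when runs misbehave.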