

ConsultUSA
MLOps Engineer (W2 Only)
Featured Role | Apply direct with Data Freelance Hub
This role is for an MLOps Engineer (W2 Only) with a contract length of over 6 months, offering competitive pay. Key skills include 5+ years in software/data engineering, expert Python and Hadoop proficiency, and experience with CI/CD in ML environments.
Country
United States
Currency
$ USD
Day rate
Unknown
Date
November 6, 2025
Duration
More than 6 months
Location
Unknown
Contract
W2 Contractor
Security
Unknown
Location detailed
Pittsburgh, PA
Skills detailed
#Distributed Computing #Deployment #PySpark #Spark (Apache Spark) #Monitoring #Data Processing #Data Engineering #Pandas #Python #Kafka (Apache Kafka) #ML (Machine Learning) #Hadoop
Role description
• This opening is a contract opportunity with potential for full-time conversion. As such, our client is seeking candidates who are immediately hireable and will not require sponsorship in the future.
• Our client has an immediate need for an MLOps Engineer to support the development, deployment, and maintenance of large-scale ML pipelines. This role will collaborate closely with cross-functional teams to optimize workflows, ensure system reliability, and contribute to internal MLOps frameworks.
Technical Skills:
Must-Have (5+ years):
• 5+ years of experience in software engineering, data engineering, or MLOps
• Expert-level proficiency in Python, including Pandas, PySpark, and PyArrow
• Expert-level proficiency in the Hadoop ecosystem, distributed computing, and performance tuning
• Experience with CI/CD tools and best practices in ML environments
• Experience with monitoring tools and techniques for ML pipeline health and performance
• Strong collaboration skills in cross-functional teams
Nice-to-Have:
• Experience contributing to internal MLOps frameworks or platforms
• Familiarity with SLURM clusters or other distributed job schedulers
• Exposure to Kafka, Spark Streaming, or other real-time data processing tools
• Knowledge of model lifecycle management, including versioning, deployment, and drift detection
Education / Certifications:
• Bachelor's degree in a technical field preferred
• SAFe certification is a plus
Key Responsibilities
• Optimize and maintain large-scale feature engineering jobs using PySpark, Pandas, and PyArrow on Hadoop infrastructure
• Refactor and modularize ML codebases to improve reusability, maintainability, and performance
• Collaborate with platform teams to manage compute capacity, resource allocation, and system updates
• Integrate with the Model Serving Framework for testing, deployment, and rollback of ML workflows
• Monitor and troubleshoot production ML pipelines, ensuring high reliability, low latency, and cost efficiency
• Contribute to the internal Model Serving Framework by proposing improvements and documenting best practices.
