

Jobs via Dice
Sr MLOps Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Sr MLOps Engineer on a 3-month contract, with potential for hire, based in Pittsburgh, PA or Strongsville, OH. Requires 5+ years in software/data engineering and expertise in Python, Hadoop, and CI/CD tools.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
November 11, 2025
🕒 - Duration
3 to 6 months
-
🏝️ - Location
On-site
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Pittsburgh, PA
-
🧠 - Skills detailed
#Deployment #ML (Machine Learning) #AWS (Amazon Web Services) #Data Science #Data Processing #Pandas #Spark (Apache Spark) #Hadoop #PySpark #Monitoring #SageMaker #Python #Distributed Computing #Kafka (Apache Kafka) #Data Engineering
Role description
Dice is the leading career destination for tech experts at every stage of their careers. Our client, AGM Tech Solutions, LLC, is seeking the following. Apply via Dice today!
Job Title: Software Engineer Sr.
Position Type: 3-month contract (contract with the right to hire if a need becomes available)
The candidate must live local to one of the locations listed below.
Pittsburgh, PA 15222 OR Strongsville, OH 44136
• The candidate must be authorized to work for any employer on W2.
Job Responsibilities:
• Refactor and modularize ML codebases to improve reusability, maintainability, and performance.
• Collaborate with platform teams to manage compute capacity, resource allocation, and system updates.
• Integrate with existing Model Serving Framework to support testing, deployment, and rollback of ML workflows.
• Monitor and troubleshoot production ML pipelines, ensuring high reliability, low latency, and cost efficiency.
• Contribute to internal Model Serving Framework by sharing insights, proposing and implementing improvements, and documenting best practices.
• (Nice to Have) Experience implementing near real-time ML pipelines using Kafka and Spark Streaming for low-latency use cases. Experience with AWS and the SageMaker MLOps ecosystem.
Must Have Technical Skills:
• Expert-level proficiency in Python, with strong experience in Pandas, PySpark, and PyArrow.
• Expert-level proficiency in Hadoop ecosystem, distributed computing, and performance tuning.
• 5+ years of experience in software engineering, data engineering, or MLOps roles.
• Experience with CI/CD tools and best practices in ML environments.
• Experience with monitoring tools and techniques for ML pipeline health and performance.
• Strong collaboration skills, especially in cross-functional environments involving platform and data science teams.
Preferred Skills:
• Experience contributing to internal MLOps frameworks or platforms.
• Familiarity with SLURM clusters or other distributed job schedulers.
• Exposure to Kafka, Spark Streaming, or other real-time data processing tools.
• Knowledge of model lifecycle management, including versioning, deployment, and drift detection.