FDM Group

ML Ops Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for an ML Ops Engineer in Cary, NC, with a contract length of 12 months, paying competitively. Key skills include expertise in GCP, Python, and CI/CD tools. A bachelor's degree and 5+ years in MLOps or DevOps are required.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
October 28, 2025
🕒 - Duration
More than 6 months
-
🏝️ - Location
Hybrid
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Cary, NC
-
🧠 - Skills detailed
#"ETL (Extract #Transform #Load)" #Cloud #Docker #Kubernetes #Jenkins #Observability #AI (Artificial Intelligence) #DevOps #Data Governance #Compliance #Monitoring #Deployment #Terraform #Security #Scala #GCP (Google Cloud Platform) #MLflow #Logging #GitHub #ML Ops (Machine Learning Operations) #Cybersecurity #Computer Science #BigQuery #Batch #Data Science #Data Engineering #Python #ML (Machine Learning)
Role description
About The Role
This position requires the successful candidate to work on a W2 directly with FDM. We cannot accept C2C, 1099 or employment sponsorship (e.g. H-1B) for this position. FDM is a global business and technology consultancy delivering client- and industry-driven solutions through our five core specialist Practices: Software Engineering, Data & Analytics, IT Operations, Change & Transformation, and Risk, Regulation & Compliance.
FDM is seeking an ML Ops Engineer located in Cary, NC to support a project in the Financial Services sector. Involvement in this project is anticipated to last an initial 12 months but may be extended. This role will be hybrid, with a requirement to be in the office 3 days per week.
We are seeking a skilled MLOps Engineer with deep expertise in Google Cloud Platform (GCP) to help us build world-class machine learning and AI capabilities within the bank. You will be instrumental in designing, implementing, and maintaining scalable infrastructure and automated pipelines that support the full machine learning lifecycle, from experimentation to deployment and monitoring. This role involves close collaboration with data scientists, data engineers, product managers, and platform teams to operationalize models, streamline workflows, and uphold the highest standards of security, privacy, and compliance. You'll help define and evolve our ML Ops practices, ensuring our AI solutions are reliable, reproducible, and impactful.
Key Responsibilities
• Build and maintain CI/CD pipelines for ML workflows using GCP-native tools such as Cloud Build, Artifact Registry, and Cloud Deploy.
• Containerize and orchestrate ML workloads using Docker, Kubernetes, and GKE (Google Kubernetes Engine).
• Collaborate with cross-functional teams to transition models from development to production, integrating them into customer-facing applications.
• Implement robust model monitoring, logging, and alerting using tools like Vertex AI Model Monitoring, Cloud Logging, and Cloud Monitoring.
• Define and enforce best practices for model versioning, testing, and reproducibility using tools like MLflow and Vertex AI Pipelines (see the sketch after this section).
• Ensure infrastructure adheres to security and compliance standards, working closely with Cybersecurity and Data Governance teams.
• Continuously evaluate and integrate emerging GCP technologies to enhance platform capabilities and delivery speed.
About You
Required Skills & Experience
• Bachelor's degree in Computer Science, Software Engineering, or a related field.
• 5+ years of experience in MLOps or DevOps roles, with a strong focus on cloud-native ML infrastructure.
• Proven experience deploying ML models in production (batch and real-time), ideally in regulated or privacy-sensitive environments.
• Proficiency in Python, with solid software engineering fundamentals and experience using Terraform or Deployment Manager for infrastructure-as-code.
• Hands-on experience with:
  • GCP ML tools: Vertex AI, AI Platform, BigQuery ML
  • CI/CD: Cloud Build, GitHub Actions, Jenkins
  • Containerization & Orchestration: Docker, Kubernetes, GKE
Preferred Qualifications
• Experience deploying and managing generative AI models (LLMs) in production, including prompt engineering, evaluation pipelines, and safety guardrails.
• Familiarity with observability tools such as MLflow, LangFuse, or Braintrust.
• Exposure to data governance and privacy frameworks in cloud environments.
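For illustration only (not part of the FDM role description), the sketch below shows the kind of model-tracking and deployment workflow referred to above, assuming the Vertex AI Python SDK (google-cloud-aiplatform) and MLflow; the project ID, region, bucket path, tracking server, and serving image are hypothetical placeholders.

# Minimal sketch, assuming the Vertex AI SDK (google-cloud-aiplatform) and MLflow are installed.
# The project, region, bucket, tracking server, and container image are hypothetical placeholders.
import mlflow
from google.cloud import aiplatform

# Record parameters and metrics for a training run so results stay reproducible and auditable.
mlflow.set_tracking_uri("http://mlflow.internal.example:5000")  # hypothetical tracking server
mlflow.set_experiment("credit-risk-model")                      # hypothetical experiment name
with mlflow.start_run():
    mlflow.log_param("model_type", "xgboost")
    mlflow.log_metric("auc", 0.91)

# Register the trained model artifact with Vertex AI and deploy it for real-time serving.
aiplatform.init(project="my-gcp-project", location="us-east1")  # hypothetical project/region
model = aiplatform.Model.upload(
    display_name="credit-risk-model",
    artifact_uri="gs://my-bucket/models/credit-risk/",  # hypothetical GCS artifact path
    serving_container_image_uri="us-docker.pkg.dev/vertex-ai/prediction/xgboost-cpu.1-7:latest",
)
endpoint = model.deploy(machine_type="n1-standard-4")
print(f"Deployed to endpoint: {endpoint.resource_name}")

In practice, steps like these would typically run inside a Cloud Build or Vertex AI Pipelines job rather than by hand, with the resulting endpoint wired into Vertex AI Model Monitoring and Cloud Logging as described in the responsibilities above.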
About FDM
FDM is an award-winning global business and technology consultancy that has powered the people behind tech and innovation for over 30 years. We collaborate with world-leading companies to identify the expertise they need, exactly when they need it. We have helped launch nearly 25,000 careers globally to date and are a trusted partner to over 300 companies worldwide. FDM has 2,500+ employees worldwide, with over 80 nationalities working together as a team. From our origins in Brighton, UK, FDM has grown to 19 centers across North America, Europe and Asia-Pacific and is listed on the FTSE4Good Index.