

MLOps Engineer - Only W2
Featured Role | Apply directly with Data Freelance Hub
This role is for an MLOps Engineer on a 12-month W2 contract, paying "pay rate". It is remote (North Carolina client, working CST hours) and requires 4+ years of MLOps experience, GCP proficiency, Python skills, and solid SQL knowledge.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: August 20, 2025
Project duration: More than 6 months
Location type: Remote
Contract type: W2 Contractor
Security clearance: Unknown
Location detailed: United States
Skills detailed:
#BigQuery #MLflow #IAM (Identity and Access Management) #Deployment #ML (Machine Learning) #AI (Artificial Intelligence) #Python #Data Engineering #Docker #TensorFlow #Monitoring #Cloud #Security #Compliance #SQL (Structured Query Language) #Data Science #Logging #Data Modeling #Batch #AutoScaling #PyTorch #Datasets #GitHub #Infrastructure as Code (IaC) #GCP (Google Cloud Platform) #Observability #Terraform #Storage #DevOps
Role description
MLOps Engineer
12 Months Contract
REMOTE, North Carolina
Customer: CenterPoint Energy
Job Description
Location: Remote (working in CST hours)
Project Details: Own the end-to-end lifecycle of production ML: training, packaging, deployment, monitoring, and governance. Build reusable pipelines and tooling so data scientists and contractors can ship reliable models quickly - batch and real-time - on Google Cloud.
Must Have Skills:
• 4+ years of MLOps/ML platform or DevOps experience for data/ML systems
• Hands-on GCP experience: BigQuery, Cloud Run, Cloud Storage, Pub/Sub, Cloud Build (Vertex AI a plus)
• Proficiency with Python, packaging (Docker), and CI/CD
• Solid SQL skills and an understanding of data modeling for ML features/labels (see the query sketch after this list)
• Experience operating production models with monitoring, alerting, and incident response
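As a minimal illustration of the SQL/feature-modeling skill above, the sketch below pulls point-in-time features and labels from BigQuery using the google-cloud-bigquery client. The dataset, table, and column names are hypothetical placeholders, not CenterPoint schemas.

# Sketch: point-in-time features/labels from BigQuery (illustrative only).
# Dataset, table, and column names below are hypothetical placeholders.
from google.cloud import bigquery

client = bigquery.Client()
sql = """
SELECT f.customer_id,
       f.usage_30d,   -- feature value as of the cutoff date
       l.churned      -- label observed after the cutoff
FROM ml.features AS f
JOIN ml.labels AS l USING (customer_id)
WHERE f.feature_date = @cutoff
"""
job_config = bigquery.QueryJobConfig(
    query_parameters=[bigquery.ScalarQueryParameter("cutoff", "DATE", "2025-08-01")]
)
for row in client.query(sql, job_config=job_config).result():
    print(row.customer_id, row.usage_30d, row.churned)

Parameterizing the cutoff date keeps features point-in-time correct and avoids label leakage during backfills.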
Nice to have Skills:
• Model registry & experiment tracking (MLflow, W&B, or Vertex AI); a minimal tracking sketch follows this list
• Data validation & monitoring (Great Expectations, TensorFlow Data Validation, WhyLabs, Arize)
• Feature store concepts (BQ-based or managed)
• Canary/shadow deployments, autoscaling, and performance tuning
• IaC (Terraform), testing frameworks (unit/integration/load), and observability (OpenTelemetry, Cloud Monitoring)
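To make the registry and experiment-tracking bullet concrete, here is a minimal MLflow sketch, assuming a tracking server is reachable; the tracking URI, experiment name, and model name are placeholders.

# Sketch: MLflow experiment tracking + model registry (illustrative only).
# The tracking URI, experiment name, and model name are assumed placeholders.
import mlflow
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

mlflow.set_tracking_uri("http://mlflow.internal:5000")  # placeholder server
mlflow.set_experiment("demo-churn")                     # hypothetical experiment

X, y = make_classification(n_samples=500, random_state=42)

with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=100).fit(X, y)
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    # registered_model_name creates a new, versioned registry entry,
    # giving promotions an audit trail.
    mlflow.sklearn.log_model(model, "model", registered_model_name="demo-churn")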
Day-to-day responsibilities:
• Pipelines & orchestration: Design CI/CD and scheduled pipelines for training and inference (Cloud Build, Workflows/Scheduler, Pub/Sub, Cloud Run; Vertex Pipelines if used).
• Packaging & deployment: Standardize model packaging (Docker), artifact versioning, and rollout strategies (A/B, canary, shadow) with automated rollbacks.
• Data/feature flows: Define contracts for features/labels in BigQuery and manage backfills; support batch and (where applicable) streaming features.
• Registry & experimentation: Stand up a model registry and experiment tracking (MLflow/Weights & Biases/Vertex) with approvals and audit trails.
• Monitoring & quality: Implement data/feature validation, drift/decay monitoring, performance/latency SLOs, and alerting; build dashboards and playbooks (a drift-check sketch follows this list).
• Security & compliance: Enforce IAM least privilege, service accounts, Secrets Manager, provenance/lineage, and change management.
• Cost & performance: Track training/inference cost and latency; optimize hardware/autoscaling and query patterns.
• Enablement: Create templates, docs, and tooling so DS/contractors can add models with minimal friction.
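The drift-check sketch referenced in the monitoring bullet: a population stability index (PSI) comparison between training and serving distributions in plain numpy. The 0.2 threshold and the synthetic data are illustrative assumptions, not project values.

# Sketch: feature-drift check via population stability index (PSI).
# The 0.2 threshold and synthetic distributions are illustrative assumptions.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between training (expected) and serving (actual) samples."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)  # floor empty bins to avoid log(0)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

train = np.random.normal(0.0, 1.0, 10_000)  # stand-in for training features
live = np.random.normal(0.3, 1.1, 2_000)    # stand-in for serving features

score = psi(train, live)
if score > 0.2:  # a commonly cited rule-of-thumb threshold for material drift
    print(f"ALERT: PSI={score:.3f}; trigger the drift playbook")

In production, a check like this would run per feature on a schedule, emit the PSI as a custom metric, and page through Cloud Monitoring alerting rather than printing.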
Tech stack you'll use
• Compute/Orchestration: Cloud Run, Workflows/Scheduler, Pub/Sub, Vertex Pipelines (optional); a minimal Cloud Run + Pub/Sub sketch follows this list
• Data/Storage: BigQuery, Cloud Storage (artifacts, datasets)
• CI/CD & IaC: Cloud Build or GitHub Actions, Terraform
• ML Tooling: MLflow/W&B/Vertex, Docker, PyTorch/TF/XGBoost (as provided by DS)
• Monitoring: Cloud Logging/Monitoring, Evidently/WhyLabs/Arize, custom run IDs & metrics
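As one way the compute and messaging pieces above fit together, here is a minimal Pub/Sub-push-triggered inference service for Cloud Run, written with Flask; the payload shape and the elided model loading are assumptions, not project specs.

# Sketch: Cloud Run service scoring Pub/Sub push messages (illustrative only).
# The request payload shape and model loading are assumed, not project specs.
import base64
import json
import os

from flask import Flask, request

app = Flask(__name__)

@app.route("/", methods=["POST"])
def handle_pubsub():
    envelope = request.get_json()
    if not envelope or "message" not in envelope:
        return "Bad Request: not a Pub/Sub push message", 400
    # Pub/Sub push delivery base64-encodes the payload under message.data.
    payload = json.loads(base64.b64decode(envelope["message"]["data"]))
    # prediction = model.predict(payload["features"])  # model loading elided
    print("scored message", envelope["message"].get("messageId"))
    return "", 204  # any 2xx acks the message so Pub/Sub will not redeliver

if __name__ == "__main__":
    # Cloud Run injects PORT; default to 8080 for local runs.
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))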
How we work
• Small, versioned releases; test-first pipelines; documented runbooks.
• Clear SLOs and blameless incident reviews.
• Close partnership with Data Engineering and Data Science; contracts over assumptions.