

ML Engineer with Databricks
Featured Role | Apply directly with Data Freelance Hub
This role is for a long-term ML Engineer with Databricks in Winston-Salem, NC, offering a competitive pay rate. It requires 8+ years of MLOps experience and proficiency in Databricks, Python, SQL, and cloud platforms (Azure, AWS, GCP).
Country: United States
Currency: $ USD
Day rate: 480
Date discovered: August 9, 2025
Project duration: Unknown
Location type: On-site
Contract type: Unknown
Security clearance: Unknown
Location detailed: Winston-Salem, NC
Skills detailed: #Azure DevOps #Libraries #Monitoring #SQL (Structured Query Language) #ML Ops (Machine Learning Operations) #Model Deployment #Jenkins #Security #Delta Lake #Python #PySpark #Cloud #DevOps #AWS SageMaker #REST API #REST (Representational State Transfer) #Observability #Data Pipeline #GitHub #AI (Artificial Intelligence) #GCP (Google Cloud Platform) #Kubernetes #Airflow #Data Engineering #IAM (Identity and Access Management) #AWS (Amazon Web Services) #Docker #MLflow #Scala #Spark (Apache Spark) #Azure Databricks #SageMaker #Databricks #ML (Machine Learning) #Azure #Apache Spark #Deployment
Role description
Hiring Now: ML Engineer with Databricks
Location: Winston-Salem, NC (Onsite | In-Person Interview Required)
Duration: Long-term project with a leading enterprise
We're looking for a Databricks & MLOps Engineer with 8+ years of experience in machine learning operations, model lifecycle management, and cloud-based data platforms. The ideal candidate has deep hands-on experience with Databricks, MLflow, CI/CD, and orchestration tools, and is comfortable working across Azure, AWS, or GCP.
Key Responsibilities:
Databricks ML Platform Development
β’ Build scalable ML pipelines using MLflow, Delta Lake, and Feature Store
β’ Optimize model training, versioning, and deployment via Databricks Jobs & Workflows
β’ Create reusable notebooks and libraries for training, testing, and inference
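To give candidates a flavor of the "reusable libraries" bullet above, here is a minimal sketch of a shared inference helper that training, testing, and inference notebooks could all import so their preprocessing never drifts apart. The class, feature names, and toy model are hypothetical illustrations, not part of the role's actual codebase:

```python
from dataclasses import dataclass
from typing import Callable, Sequence

# Hypothetical shared helper: one preprocessing path reused by the
# training, testing, and inference notebooks.
@dataclass
class InferencePipeline:
    feature_names: Sequence[str]          # column order the model expects
    preprocess: Callable[[dict], list]    # raw record -> feature vector
    predict: Callable[[list], float]      # feature vector -> score

    def score(self, record: dict) -> float:
        missing = [f for f in self.feature_names if f not in record]
        if missing:
            raise ValueError(f"missing features: {missing}")
        return self.predict(self.preprocess(record))

# Example wiring with toy stand-ins for a real model.
pipeline = InferencePipeline(
    feature_names=["age", "balance"],
    preprocess=lambda r: [r["age"] / 100.0, r["balance"] / 1000.0],
    predict=lambda x: sum(x),  # stand-in for model.predict
)
print(pipeline.score({"age": 50, "balance": 500}))  # 1.0
```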
MLOps & Model Deployment
β’ Implement CI/CD pipelines with Databricks Repos, GitHub Actions, Jenkins, or Azure DevOps
β’ Automate deployments via MLflow Model Registry, REST APIs, or Databricks Model Serving
β’ Monitor model drift, performance, and retraining workflows
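As one illustration of the deployment bullet, a sketch of building a REST call to a Databricks Model Serving endpoint, using the `dataframe_split` input format that MLflow-served models accept. The host, endpoint name, and token are placeholders:

```python
import json

# Sketch: construct (but do not send) an invocation request for a
# Databricks Model Serving endpoint. Host/endpoint/token are placeholders.
def build_invocation_request(host: str, endpoint: str, token: str,
                             columns: list, rows: list) -> tuple:
    url = f"{host}/serving-endpoints/{endpoint}/invocations"
    headers = {
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    }
    # "dataframe_split" is the columnar JSON format MLflow scoring accepts.
    body = json.dumps({"dataframe_split": {"columns": columns, "data": rows}})
    return url, headers, body

url, headers, body = build_invocation_request(
    "https://example.cloud.databricks.com", "churn-model", "<token>",
    ["age", "balance"], [[50, 500]],
)
print(url)
```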
Cloud & Infrastructure Management
β’ Deploy solutions in Azure (Databricks, AKS), AWS (SageMaker, EMR), or GCP (Vertex AI, GKE)
β’ Containerize ML workloads using Docker and Kubernetes
β’ Manage IAM roles, security policies, and cross-cloud access
Orchestration & Data Pipelines
β’ Migrate ML workflows from Airflow, Cloud Composer, or Step Functions to Databricks Jobs
β’ Integrate with data engineering pipelines on Delta Lake and Apache Spark
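The migration bullet above can be sketched as translating a simple Airflow-style dependency chain (extract >> train >> deploy) into a Databricks Jobs 2.1 multi-task spec. Job name and notebook paths are hypothetical:

```python
# Sketch: build a Databricks Jobs 2.1 job spec from a linear task chain.
# Each entry is (task_key, notebook_path); paths here are placeholders.
def dag_to_job_spec(name: str, chain: list) -> dict:
    tasks = []
    for i, (task_key, notebook_path) in enumerate(chain):
        task = {
            "task_key": task_key,
            "notebook_task": {"notebook_path": notebook_path},
        }
        if i > 0:  # linear dependency on the previous task
            task["depends_on"] = [{"task_key": chain[i - 1][0]}]
        tasks.append(task)
    return {"name": name, "tasks": tasks}

spec = dag_to_job_spec("ml-pipeline", [
    ("extract", "/Repos/ml/extract"),
    ("train", "/Repos/ml/train"),
    ("deploy", "/Repos/ml/deploy"),
])
print(spec["tasks"][1]["depends_on"])  # [{'task_key': 'extract'}]
```

The resulting dictionary matches the shape expected by a POST to the Jobs `create` API; cluster configuration is omitted for brevity.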
Monitoring & Observability
β’ Track data & model lineage with Unity Catalog and MLflow
β’ Automate alerts for model failure, drift, and cost performance
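One common way to implement the drift alerting above is the Population Stability Index (PSI) between the training (expected) and serving (actual) feature distributions over fixed bins; the 0.2 alert threshold used below is a common rule of thumb, not a standard:

```python
import math

# Sketch of a drift check via Population Stability Index (PSI).
# expected/actual are histogram counts over the same bins.
def psi(expected: list, actual: list, eps: float = 1e-6) -> float:
    total_e, total_a = sum(expected), sum(actual)
    score = 0.0
    for e, a in zip(expected, actual):
        pe = max(e / total_e, eps)   # expected share of this bin
        pa = max(a / total_a, eps)   # actual share of this bin
        score += (pa - pe) * math.log(pa / pe)
    return score

identical = psi([100, 200, 300], [10, 20, 30])   # same proportions -> 0.0
shifted = psi([100, 200, 300], [300, 200, 100])  # reversed -> large PSI
print(identical, shifted)
```

In practice the alert would fire when `psi(...) > 0.2`, feeding the retraining workflow mentioned earlier.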
Tech Stack & Skills:
β’ Python, SQL, PySpark
β’ Databricks, MLflow, Delta Lake
β’ Airflow, MLOps, CI/CD
β’ Azure, AWS, GCP, Docker, Kubernetes
β’ Vertex AI, SageMaker, AKS
Interested or know someone perfect for this role?
DM me or send your resume to venkat@staffworxs.com