

Senior Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer (AI/ML) with an unspecified contract length and pay rate, in a remote setting. Key skills include GCP Vertex AI, MLOps, and CI/CD for ML. Requires 5+ years in data engineering and a Bachelor's degree.
Country
United States
Currency
$ USD
Day rate
-
Date discovered
July 30, 2025
Project duration
Unknown
Location type
Unknown
Contract type
Unknown
Security clearance
Unknown
Location detailed
Charlotte, NC
Skills detailed
#Dataflow #Batch #Cloud #TensorFlow #Terraform #Prometheus #Model Deployment #Data Science #Observability #ETL (Extract, Transform, Load) #Deployment #Automation #ML (Machine Learning) #BigQuery #GCP (Google Cloud Platform) #Data Engineering #Computer Science #Python #Scala #Data Ingestion #Docker #Grafana #DevOps #MLflow #Monitoring #AI (Artificial Intelligence) #Agile
Role description
We are seeking an experienced Senior Data Engineer (AI/ML) to lead the integration, deployment, and operationalization of machine learning models in a production-grade cloud environment. The ideal candidate brings deep technical expertise in Google Cloud Platform (GCP) with hands-on experience using Vertex AI, MLOps best practices, and infrastructure-as-code to deliver scalable, secure, and highly available ML solutions.
Day-to-Day Responsibilities
• Design, build, and optimize scalable ML inference pipelines using Vertex AI Pipelines, Model Endpoints, Model Monitoring, and Feature Store.
• Collaborate cross-functionally with data scientists, software engineers, and DevOps to deploy and monitor ML models in production.
• Automate the ML lifecycle, including CI/CD for ML, model retraining, versioning, shadow testing, and rollback strategies.
• Implement model observability with tools such as Stackdriver, Prometheus, or Grafana.
• Ensure seamless data ingestion, transformation, and curation pipelines from diverse data sources (DBMS, APIs, streaming, file systems).
• Translate complex business needs into scalable data and ML solutions, aligning with architectural best practices.
• Champion infrastructure automation via Terraform, Docker, and GCP Deployment Manager.
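The shadow-testing and rollback strategy mentioned in the responsibilities above can be sketched in plain Python. This is a minimal, framework-agnostic illustration; the `ShadowReport` structure, helper name, and thresholds are assumptions for the sketch, not part of any specific Vertex AI API:

```python
from dataclasses import dataclass

@dataclass
class ShadowReport:
    """Aggregated metrics from mirroring live traffic to a candidate model."""
    live_error_rate: float    # observed error rate of the serving model
    shadow_error_rate: float  # candidate's error rate on the same mirrored requests
    agreement: float          # fraction of requests where both models agree

def promote_or_rollback(report: ShadowReport,
                        max_degradation: float = 0.02,
                        min_agreement: float = 0.90) -> str:
    """Decide whether a shadow-tested candidate should replace the live model.

    Returns "promote" when the candidate is no worse than the live model
    (within max_degradation) and agrees with it often enough; otherwise
    "rollback". Thresholds here are illustrative, not prescriptive.
    """
    degraded = report.shadow_error_rate - report.live_error_rate > max_degradation
    diverged = report.agreement < min_agreement
    return "rollback" if (degraded or diverged) else "promote"

# Candidate slightly better and highly consistent with the live model:
print(promote_or_rollback(ShadowReport(0.05, 0.04, 0.97)))  # promote
# Candidate's error rate jumped on mirrored traffic:
print(promote_or_rollback(ShadowReport(0.05, 0.10, 0.95)))  # rollback
```

In practice the same gate would sit in the CI/CD pipeline, with the report populated from endpoint logs before traffic is shifted to the new model version.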
Qualifications
• Bachelor's degree in Computer Science, Engineering, or related field (or equivalent practical experience).
• 5+ years in data engineering, data warehousing, or software engineering.
• 4+ years of experience implementing full software development lifecycle (SDLC) practices in Agile environments.
MUST HAVE Skills (Non-Negotiable)
• 2+ years of hands-on experience with GCP Vertex AI, including Vertex Pipelines, Model Endpoints, Monitoring, and Feature Store.
• Proven track record scaling ML models in production (real-time + batch), optimizing performance, cost, and throughput.
• Strong knowledge of MLOps, CI/CD for ML, and deployment in containerized environments.
Preferred Skills
• Familiarity with Kubeflow, MLflow, or TensorFlow Extended (TFX) for advanced MLOps.
• Experience with hybrid/federated model deployments.
• Skilled in monitoring/alerting frameworks (e.g., Stackdriver, Prometheus, Grafana).
• Proficiency in BigQuery, Dataflow, Cloud Functions, and Python for data/ML integration and automation.
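The model-monitoring skills listed above typically involve drift detection between training and serving data. A common metric is the Population Stability Index (PSI); the sketch below is framework-agnostic, with the binning and the 0.25 "major shift" rule of thumb as assumptions rather than anything mandated by Vertex AI:

```python
import math
from typing import Sequence

def psi(expected: Sequence[float], actual: Sequence[float],
        eps: float = 1e-6) -> float:
    """Population Stability Index between two binned distributions.

    Inputs are per-bin proportions (each sequence sums to 1). Zero bins are
    clamped to eps so the logarithm stays defined.
    """
    total = 0.0
    for e, a in zip(expected, actual):
        e, a = max(e, eps), max(a, eps)
        total += (a - e) * math.log(a / e)
    return total

# Identical distributions yield PSI of 0 (no drift):
print(round(psi([0.25, 0.25, 0.25, 0.25], [0.25, 0.25, 0.25, 0.25]), 6))
# Heavily shifted serving distribution yields a large PSI:
drift = psi([0.25, 0.25, 0.25, 0.25], [0.1, 0.1, 0.1, 0.7])
print(drift > 0.25)  # True under the common "major shift" threshold
```

A check like this would run on a schedule over serving logs, paging on-call (via Prometheus/Grafana alerts or similar) when PSI crosses the chosen threshold.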