

Data Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer (Data Science Engineer III) with a 6+ month contract, paying competitively. Remote work is available (EST). Key skills include GCP, Vertex AI, MLOps, Python, and experience in deploying ML models. Preferred certifications include Google Cloud Professional Machine Learning Engineer.
Country: United States
Currency: $ USD
Day rate: Unknown
Date: October 9, 2025
Duration: More than 6 months
Location: Remote
Contract: Unknown
Security: Unknown
Location detailed: United States
Skills detailed
#Scala #Docker #Git #Documentation #Data Engineering #Computer Science #Deployment #Java #Kubernetes #BigQuery #GitHub #Batch #GCP (Google Cloud Platform) #Model Deployment #AI (Artificial Intelligence) #Data Science #Monitoring #Python #Cloud #Agile #Programming #IAM (Identity and Access Management) #TensorFlow #ETL (Extract, Transform, Load) #ML (Machine Learning) #C++ #Data Processing #Version Control #PyTorch
Role description
Role: Data Science Engineer III
Location: Virtual (EST)
Hours: 40 hours/week (No OT or weekend work expected)
Contract: 6+ months
Role Overview
As a Senior Data Science Engineer, you will play a key role in designing, developing, and implementing data science and machine learning solutions on Google Cloud Platform (GCP). You'll work collaboratively with data scientists, data engineers, and business stakeholders to build scalable ML systems that drive data-driven decision-making. This role requires strong hands-on expertise in Vertex AI, MLOps, and cloud-based model deployment.
Key Responsibilities
Initial Setup and Assessment
• Collaborate with data scientists, engineers, and stakeholders to understand business goals.
• Review current ML deployment and infrastructure to identify optimization opportunities.
• Configure GCP Vertex AI environments, IAM roles, and related tools.
• Define and document standards for model deployment, versioning, and monitoring.
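One way to make deployment and versioning standards like these enforceable rather than purely documentary is to encode them as a checkable schema. A minimal sketch in Python, where the field names and the monitoring rule are illustrative assumptions, not an existing GCP or Vertex AI requirement:

```python
# Sketch: a model-deployment standard encoded as a checkable schema.
# REQUIRED_FIELDS and the monitoring rule are made up for illustration.
REQUIRED_FIELDS = {"name", "version", "owner", "training_data_uri", "monitoring_enabled"}

def validate_model_record(record: dict) -> list:
    """Return a list of standards violations; an empty list means the record is compliant."""
    missing = sorted(REQUIRED_FIELDS - record.keys())
    errors = ["missing field: %s" % f for f in missing]
    if record.get("monitoring_enabled") is False:
        errors.append("monitoring must be enabled before deployment")
    return errors
```

A promotion pipeline can then gate on `validate_model_record(...) == []` before a model is allowed to advance toward production.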
Development and Implementation
• Build scalable ML training and inference pipelines (batch & real-time) using Vertex AI.
• Automate feature engineering and data preprocessing workflows.
• Design CI/CD pipelines integrating testing, validation, and deployment.
• Containerize models using Docker and deploy on Kubernetes Engine or Cloud Run.
• Implement robust deployment strategies including rolling updates and rollback mechanisms.
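The batch side of such a pipeline reduces to a preprocess → predict → tag-with-version loop. The sketch below shows that shape in plain Python with a stand-in model; all names are hypothetical and no Vertex AI calls are made:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Prediction:
    record_id: str
    score: float
    model_version: str  # tagging outputs with the model version aids rollback audits

def score_batch(records: List[Dict], model: Callable[[Dict], float],
                model_version: str) -> List[Prediction]:
    """Preprocess each record, score it, and tag the result with the model version."""
    results = []
    for rec in records:
        features = {k: v for k, v in rec.items() if k != "id"}  # toy preprocessing step
        results.append(Prediction(rec["id"], model(features), model_version))
    return results

# Usage with a stand-in "model" (a real pipeline would load a trained artifact):
toy_model = lambda f: sum(f.values()) / len(f)
preds = score_batch([{"id": "a", "x": 1.0, "y": 3.0}], toy_model, "v1.2.0")
```

Carrying the model version through every prediction is what makes the rolling-update and rollback strategies above auditable after the fact.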
Testing, Optimization & Monitoring
• Conduct testing of ML training and prediction pipelines, including load and stress testing.
• Optimize model performance, cost, and scalability.
• Implement Vertex AI Model Monitoring for real-time model health tracking and alerts.
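Vertex AI Model Monitoring provides drift and skew detection as a managed service; conceptually, the core check it automates looks like the sketch below, a simple relative mean-shift alert on one feature, with a threshold chosen purely for illustration:

```python
# Illustrative drift check of the kind a managed monitoring service automates:
# compare a serving-window feature mean against the training baseline and
# alert when the relative shift exceeds a threshold (0.25 is an arbitrary choice).
def mean_shift_alert(baseline, window, threshold=0.25):
    """Return True if the window mean drifts more than `threshold` (relative) from baseline."""
    base_mean = sum(baseline) / len(baseline)
    win_mean = sum(window) / len(window)
    if base_mean == 0:
        return win_mean != 0  # any nonzero shift off a zero baseline counts as drift
    return abs(win_mean - base_mean) / abs(base_mean) > threshold
```

Production monitoring uses distribution-level statistics rather than a single mean, but the alert-on-threshold structure is the same.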
Documentation & Continuous Improvement
• Document all processes, architectures, and established standards.
• Lead knowledge-transfer sessions and internal workshops.
• Gather stakeholder feedback and provide improvement recommendations.
Qualifications
Education
• Bachelor's or Master's in Computer Science, Data Science, Machine Learning, or related discipline.
Technical Skills
• Programming: Proficient in Python; experience with Java, Node.js, or C++ a plus.
• ML Frameworks: Skilled with Scikit-learn, TensorFlow, PyTorch, or similar tools.
• Cloud Expertise: Strong experience with GCP, especially Vertex AI.
• MLOps Tools: Hands-on with Docker, Kubernetes, CI/CD pipelines, and orchestration workflows.
• Data Engineering: Experience with BigQuery and data processing frameworks.
• Version Control: Proficient with Git/GitHub workflows.
Experience
• Proven success in deploying ML models to production (batch and online).
• Strong background in building and maintaining ML pipelines.
• Ability to define and implement MLOps best practices.
• Experience working in Agile and cross-functional team environments.
Preferred Certifications
• Google Cloud Professional Machine Learning Engineer
• Google Cloud Professional Data Engineer
• Google Cloud Professional Cloud Architect