

Machine Learning Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Machine Learning Engineer with a contract length of "unknown," offering a pay rate of "$X per hour." Key skills include Python, MLOps tools, containerization (Docker, Kubernetes), and experience with large-scale ML frameworks.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date discovered
May 17, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
Unknown
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
Phoenix, AZ
🧠 - Skills detailed
#Data Science #ML (Machine Learning) #Neural Networks #AI (Artificial Intelligence) #Cloud #Data Pipeline #Storage #Public Cloud #PyTorch #GCP (Google Cloud Platform) #Data Ingestion #Programming #Docker #Kubernetes #Langchain #Scala #TensorFlow #Spark (Apache Spark) #Python
Role description
• Design and implement scalable, MLOps-ready data pipelines for data ingestion, processing, and storage.
• Experience deploying models with MLOps tools such as Vertex Pipelines, Kubeflow, or similar platforms, alongside Vertex AI (a minimal pipeline sketch follows this list).
• Experience implementing and supporting end-to-end machine learning workflows and patterns, including LangChain or a similar orchestrator and a vector database or similar store.
• Expert-level programming skills in Python and experience with data science and ML packages and frameworks.
• Proficiency with containerization technologies (Docker, Kubernetes) and CI/CD practices.
• Experience working with large-scale machine learning frameworks such as TensorFlow, Caffe2, PyTorch, or Spark ML.
• Knowledge of recent advancements in generative AI, including Gemini, OpenAI models, and Claude, plus exposure to open-source large language models (LLMs).
• Experience building AI/ML products using technologies such as LLMs and neural networks.
• Experience with retrieval-augmented generation (RAG) and supervised tuning techniques (a library-agnostic retrieval sketch follows this list).
• Strong distributed systems skills and knowledge.
• Development experience with at least one public cloud provider, preferably GCP, including Google AutoML.
• Excellent analytical, written, and verbal communication skills.
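The sketch below is a minimal, illustrative example of the kind of pipeline this role describes, written with the open-source KFP v2 SDK used by both Kubeflow Pipelines and Vertex AI Pipelines. The component names (ingest, train), their bodies, the example GCS path, and the output filename are placeholder assumptions, not details from the posting.

```python
# Minimal sketch of an ingestion + training pipeline with the KFP v2 SDK.
# Assumes kfp>=2 is installed; all component logic below is a placeholder.
from kfp import dsl, compiler


@dsl.component(base_image="python:3.11")
def ingest(source_uri: str, dataset: dsl.Output[dsl.Dataset]):
    """Placeholder ingestion step: write raw data into the pipeline's artifact store."""
    with open(dataset.path, "w") as f:
        f.write(f"data ingested from {source_uri}\n")


@dsl.component(base_image="python:3.11")
def train(dataset: dsl.Input[dsl.Dataset], model: dsl.Output[dsl.Model]):
    """Placeholder training step: read the dataset artifact and emit a model artifact."""
    with open(dataset.path) as f:
        _ = f.read()
    with open(model.path, "w") as f:
        f.write("trained-model-bytes\n")


@dsl.pipeline(name="example-training-pipeline")
def training_pipeline(source_uri: str = "gs://example-bucket/raw/"):
    # Wire the ingestion output into the training step.
    ingest_task = ingest(source_uri=source_uri)
    train(dataset=ingest_task.outputs["dataset"])


if __name__ == "__main__":
    # Compile to a pipeline spec that can be submitted to Vertex AI Pipelines
    # (e.g. via google.cloud.aiplatform.PipelineJob) or a Kubeflow Pipelines cluster.
    compiler.Compiler().compile(training_pipeline, "training_pipeline.json")
```

The same containerized components can be versioned and promoted through CI/CD, which is where the Docker, Kubernetes, and MLOps requirements above intersect.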
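As a companion sketch for the RAG requirement, the snippet below shows the core retrieval step in a library-agnostic way: embed the query, score stored chunks by cosine similarity, and assemble the top matches into a prompt. The embed callable, helper names, and prompt wording are illustrative assumptions; in practice a vector database and an orchestrator such as LangChain handle chunking, indexing, and retrieval.

```python
# Library-agnostic sketch of RAG retrieval. Assumes an embed(texts) -> np.ndarray
# function backed by any embedding model (e.g. a Vertex AI or open-source endpoint).
from typing import Callable, List

import numpy as np


def top_k_chunks(
    query: str,
    chunks: List[str],
    embed: Callable[[List[str]], np.ndarray],
    k: int = 3,
) -> List[str]:
    """Rank stored text chunks by cosine similarity to the query embedding."""
    chunk_vecs = embed(chunks)                       # shape: (num_chunks, dim)
    query_vec = embed([query])[0]                    # shape: (dim,)
    chunk_vecs = chunk_vecs / np.linalg.norm(chunk_vecs, axis=1, keepdims=True)
    query_vec = query_vec / np.linalg.norm(query_vec)
    scores = chunk_vecs @ query_vec                  # cosine similarity per chunk
    best = np.argsort(scores)[::-1][:k]
    return [chunks[i] for i in best]


def build_prompt(query: str, context_chunks: List[str]) -> str:
    """Assemble retrieved context and the user question into a single LLM prompt."""
    context = "\n\n".join(context_chunks)
    return f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {query}"
```

Supervised tuning complements this pattern: retrieval grounds responses in fresh data, while tuning adapts the base model's behavior to the product's domain and tone.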