

AI/ML Engineer with 12+ Years of Experience (Day 1 Onsite)
Featured Role | Apply directly with Data Freelance Hub
This role is for an AI/ML Engineer with 12+ years of experience, focusing on Python, SQL, Docker, Kubernetes, and MLOps. Contract length is unspecified, with a pay rate of "TBD." Requires expertise in scalable data pipelines and deploying ML models.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: August 19, 2025
Project duration: Unknown
Location type: Unknown
Contract type: Unknown
Security clearance: Unknown
Location detailed: Phoenix, AZ
Skills detailed: #Data Pipeline #Spark (Apache Spark) #AI (Artificial Intelligence) #Programming #Data Science #Kubernetes #Public Cloud #GCP (Google Cloud Platform) #Flask #API (Application Programming Interface) #Docker #TensorFlow #Data Ingestion #PyTorch #Storage #Langchain #Scala #ML (Machine Learning) #Neural Networks #Cloud #Python #SQL (Structured Query Language)
Role description
Tech Stack - Python, SQL, Docker & Kubernetes, FastAPI, Flask, MLOps, Machine Learning, LLMs, LangChain or similar orchestrator, Vector DB or similar, GCP, Google AutoML, Vertex AI & build tools
Hands-on coding experience is required.
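For orientation only, a minimal model-serving sketch in the spirit of this stack (Python, FastAPI, containerized deployment) might look like the following; the model file, request schema, and endpoint name are hypothetical placeholders, not details from the posting.

```python
# Minimal FastAPI serving sketch (illustrative only; model artifact and schema are hypothetical).
from fastapi import FastAPI
from pydantic import BaseModel
import joblib  # assumes a scikit-learn-style model serialized with joblib

app = FastAPI(title="ml-inference")
model = joblib.load("model.joblib")  # placeholder artifact name

class PredictRequest(BaseModel):
    features: list[float]  # flat feature vector; real schemas are usually richer

class PredictResponse(BaseModel):
    prediction: float

@app.post("/predict", response_model=PredictResponse)
def predict(req: PredictRequest) -> PredictResponse:
    # scikit-learn models expect a 2-D array: one row per example
    y = model.predict([req.features])
    return PredictResponse(prediction=float(y[0]))
```

Such a service would typically be run locally with uvicorn (for example, `uvicorn main:app --reload`) and packaged in a standard Python Docker image for deployment on Kubernetes.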
JOB SPECIFICATIONS:
At least 12 years of overall experience is required.
Design and implement scalable, MLOps-ready data pipelines for data ingestion, processing, and storage.
Experience deploying models with MLOps tools such as Vertex Pipelines, Kubeflow, or similar platforms (a minimal pipeline sketch follows this list).
Experience implementing and supporting end-to-end Machine Learning workflows and patterns.
Expert-level programming skills in Python and experience with data science and ML packages and frameworks.
Proficiency with containerization technologies (Docker, Kubernetes) and CI/CD practices.
Experience working with large-scale machine learning frameworks such as TensorFlow, Caffe2, PyTorch, Spark ML, or related frameworks.
Knowledge of recent advancements in generative AI, including Gemini, OpenAI, and Claude models, and exposure to open-source Large Language Models (LLMs).
Experience building AI/ML products using technologies such as LLMs, neural networks, and related techniques.
Experience with Retrieval-Augmented Generation (RAG) and supervised tuning techniques.
Strong distributed systems skills and knowledge.
Development experience with at least one public cloud provider, preferably GCP.
Excellent analytical, written, and verbal communication skills.
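As a rough illustration of the Vertex Pipelines / Kubeflow item above, a minimal Kubeflow Pipelines (KFP v2) definition that compiles to a spec runnable on Vertex AI Pipelines might look like the sketch below; the component bodies, pipeline name, and bucket path are hypothetical placeholders, not details from the posting.

```python
# Minimal Kubeflow Pipelines (KFP v2) sketch; the compiled spec can be submitted to
# Vertex AI Pipelines or a Kubeflow cluster. Names and paths are placeholders.
from kfp import dsl, compiler

@dsl.component(base_image="python:3.11")
def ingest(source_uri: str) -> str:
    # In a real pipeline this would read from GCS/BigQuery and emit a dataset artifact.
    return f"ingested:{source_uri}"

@dsl.component(base_image="python:3.11")
def train(dataset: str) -> str:
    # Placeholder for feature processing and model training; returns a model reference.
    return f"model-trained-on:{dataset}"

@dsl.pipeline(name="example-training-pipeline")
def training_pipeline(source_uri: str = "gs://example-bucket/raw-data"):
    ingest_task = ingest(source_uri=source_uri)
    train(dataset=ingest_task.output)

if __name__ == "__main__":
    # The compiled JSON spec is what the pipeline service actually executes.
    compiler.Compiler().compile(
        pipeline_func=training_pipeline,
        package_path="training_pipeline.json",
    )
```

The compiled training_pipeline.json could then be submitted as a pipeline job via the Google Cloud AI Platform SDK or the console, which is the usual hand-off point between pipeline authoring and managed execution on Vertex AI.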