TalentOla

GenAI

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a GCP GenAI Platform Engineer in Charlotte, NC. It is an on-site W2 contract requiring 10 years of experience, at a competitive pay rate. Key skills include GCP AI/ML, Terraform, GKE, and Python.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
November 13, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Charlotte, NC
-
🧠 - Skills detailed
#Deployment #Cloud #Compliance #ML (Machine Learning) #Leadership #Scripting #Scala #YAML (YAML Ain't Markup Language) #Kubernetes #Microsoft Azure #Strategy #GCP (Google Cloud Platform) #Azure #Microservices #Terraform #Python #Docker #AI (Artificial Intelligence) #Security #Data Science #Automation
Role description
GCP GenAI Platform Engineer (Onsite role, 5 days) - W2 Only / No C2C
Location: Charlotte, NC (local candidates only)
Experience Level: 10 years
1️⃣ GCP GenAI Platform Engineer
Job Summary
We are seeking a skilled GCP GenAI Platform Engineer to design, deploy, and manage next-generation Generative AI (GenAI) solutions on Google Cloud Platform. The ideal candidate will have hands-on experience with GCP AI/ML services, guardrails (Model Armor/SDP), Terraform, and GKE, along with a strong understanding of GenAI architectures and deployment best practices.
Key Responsibilities
• Design and deploy GenAI workloads and LLM-based applications on GCP.
• Implement GCP guardrails (Model Armor/SDP) for secure and compliant AI operations.
• Automate infrastructure provisioning and configuration using Terraform.
• Manage and scale GenAI models on Google Kubernetes Engine (GKE).
• Collaborate with data scientists and ML engineers to operationalize LLMs.
• Monitor performance, optimize cost, and ensure high availability of AI services.
• Troubleshoot deployment issues, analyze logs, and enhance platform reliability.
Required Skills & Qualifications
• Strong experience with the GCP AI/ML stack, Vertex AI, and GenAI APIs.
• Working knowledge of Terraform and GKE.
• Proficiency in Python for scripting and automation.
• Understanding of GenAI concepts, prompt engineering, and LLM architectures.
• Familiarity with security, compliance, and governance in AI workloads.
• Excellent problem-solving and collaboration skills.
2️⃣ Azure GenAI Platform Engineer
Job Summary
We are looking for an experienced Azure GenAI Platform Engineer to build and manage AI/LLM-based applications on the Microsoft Azure GenAI ecosystem. The ideal candidate should be proficient with Azure OpenAI Service, Azure guardrails (Content Filtering, Purview, etc.), Terraform, and Kubernetes, with a deep understanding of GenAI concepts and enterprise-scale deployment practices.
Key Responsibilities
• Architect, deploy, and manage GenAI workloads using Azure OpenAI, Cognitive Services, and related tools.
• Implement Azure guardrails for responsible AI usage (Content Filtering, Purview).
• Automate infrastructure setup using Terraform.
• Deploy and manage AI microservices on Azure Kubernetes Service (AKS) or GKE.
• Integrate LLM endpoints with enterprise applications securely.
• Work closely with leadership and product teams to design scalable, production-ready AI solutions.
Required Skills & Qualifications
• Hands-on experience with Azure OpenAI Service, Azure ML, and Azure AI Studio.
• Proficiency in Terraform, Python, and containerized deployments (Docker, AKS/GKE).
• Strong understanding of LLMs, prompt engineering, and GenAI system design.
• Familiarity with Azure guardrails (Purview, Content Filtering).
• Excellent communication skills for working with technical and business stakeholders.
3️⃣ OCP GenAI Platform Engineer (Red Hat OpenShift)
Job Summary
We are seeking an experienced OCP GenAI Platform Engineer with strong expertise in Red Hat OpenShift (OCP) and LLM deployment. The ideal candidate will have deep hands-on experience with the vLLM inference engine, GPU-based workloads, and end-to-end deployment of large language models (LLMs). The role requires someone who can operate independently, troubleshoot complex deployments, and collaborate directly with leadership on strategic AI initiatives.
Key Responsibilities
• Deploy and manage GenAI and LLM workloads on the Red Hat OpenShift platform.
• Configure and optimize the vLLM inference engine for GPU-based model serving.
• Manage deployment of LLM endpoints, model tuning, and performance optimization.
• Develop and debug deployment manifests (YAML files) and infrastructure configurations.
• Ensure stability, scalability, and security of GenAI workloads on OCP.
• Collaborate with AI architects and leadership to drive AI platform strategy and adoption.
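For context, the "deployment manifests (YAML files)" this role describes typically resemble the following minimal sketch of a vLLM serving Deployment on OpenShift/Kubernetes. The manifest assumes vLLM's OpenAI-compatible server image; the deployment name, model, and resource values are all illustrative placeholders, not part of the posting.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vllm-serving              # hypothetical name
  labels:
    app: vllm-serving
spec:
  replicas: 1
  selector:
    matchLabels:
      app: vllm-serving
  template:
    metadata:
      labels:
        app: vllm-serving
    spec:
      containers:
        - name: vllm
          image: vllm/vllm-openai:latest       # vLLM's OpenAI-compatible server image
          args:
            - "--model"
            - "meta-llama/Llama-3.1-8B-Instruct"   # placeholder model ID
          ports:
            - containerPort: 8000              # vLLM's default serving port
          resources:
            limits:
              nvidia.com/gpu: "1"              # request one GPU for inference
```

A manifest like this would be applied with `oc apply -f` and typically paired with a Service and Route to expose the LLM endpoint inside the cluster.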
Required Skills & Qualifications
• Strong experience with Red Hat OpenShift Container Platform (OCP).
• Deep understanding of LLM internals, inference optimization, and GPU utilization.
• Hands-on experience with vLLM, Python scripting, and YAML-based deployments.
• Ability to troubleshoot OCP issues independently.
• Strong collaboration and leadership communication skills.