Turing

Gen AI Engineer - 57001

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Gen AI Engineer, a long-term contract position based in Richardson, TX, offering competitive pay. Key requirements include 8 years of software engineering experience, 3 years in AI/ML, proficiency in Python, and expertise with LLM frameworks.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
πŸ—“οΈ - Date
February 5, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
πŸ“ - Location detailed
Austin, TX
-
🧠 - Skills detailed
#AI (Artificial Intelligence) #GCP (Google Cloud Platform) #Automation #GIT #Langchain #ETL (Extract, Transform, Load) #Python #DevOps #ML (Machine Learning) #Scala #Cloud #Deployment
Role description
About Us: Based in San Francisco, California, Turing is the world's leading research accelerator for frontier AI labs and a trusted partner for global enterprises deploying advanced AI systems. Turing supports customers in two ways: first, by accelerating frontier research with high-quality data, advanced training pipelines, and top AI researchers who specialize in software engineering, logical reasoning, STEM, multilinguality, multimodality, and agents; and second, by applying that expertise to help enterprises transform AI from proof of concept into proprietary intelligence with systems that perform reliably, deliver measurable impact, and drive lasting results on the P&L.

Project Overview: As a Generative AI Engineer, you'll be a core member of this pod, building and integrating agentic systems powered by cutting-edge LLM and GenAI technologies. You'll work closely with Tech Leads and Full Stack Engineers to turn AI capabilities into production-ready enterprise solutions.

What Does a Typical Day Look Like?
• Design, develop, and deploy agentic AI systems leveraging LLMs and modern AI frameworks.
• Integrate GenAI models into full-stack applications and internal workflows.
• Collaborate on prompt engineering, model fine-tuning, and evaluation of generative outputs.
• Build reusable components and services for multi-agent orchestration and task automation.
• Optimize AI inference pipelines for scalability, latency, and cost efficiency.
• Participate in architectural discussions, contributing to the pod's technical roadmap.

Required Skills:
• 8 years of software engineering experience, with at least 3 years building AI/ML or GenAI systems in production.
• Hands-on experience with Python (Python only) for AI/ML model integration.
• Experience with LLM frameworks (LangChain and LlamaIndex are a must).
• Exposure to agentic frameworks (LangGraph and Google ADK are a must).
• Understanding of Git, CI/CD, DevOps, and production-grade GenAI deployment practices.
• Familiarity with Google Cloud Platform (GCP), e.g. Vertex AI, Cloud Run, and GKE.

Engagement Details:
Commitment: Onsite (3 days per week) in Richardson, TX
Type: Contractor, full-time
Duration: Long term
Interview Process: AI Assessment → Technical/Delivery interview → Optional 30-min client call → In-person (TX)

After applying, you will receive an email with a login link. Please use that link to access the portal and complete your profile.

Know amazing talent? Refer them at turing.com/referrals and earn money from your network.