

Turing
Gen AI Engineer - 57001
Featured Role | Apply directly with Data Freelance Hub
This role is for a Gen AI Engineer with 8 years of software engineering experience, including 3 years in AI/ML. Requires expertise in Python, LLM frameworks, and Google Cloud Platform. Onsite in Richardson, TX for a long-term contract.
Country: United States
Currency: $ USD
Day rate: Unknown
Date: January 28, 2026
Duration: More than 6 months
Location: On-site
Contract: Unknown
Security: Unknown
Location detailed: Dallas, TX
Skills detailed: #AI (Artificial Intelligence) #ETL (Extract, Transform, Load) #GCP (Google Cloud Platform) #LangChain #DevOps #Scala #Git #Cloud #Python #Automation #ML (Machine Learning) #Deployment
Role description
About Us:
Based in San Francisco, California, Turing is the world's leading research accelerator for frontier AI labs and a trusted partner for global enterprises deploying advanced AI systems. Turing supports customers in two ways: first, by accelerating frontier research with high-quality data, advanced training pipelines, plus top AI researchers who specialize in software engineering, logical reasoning, STEM, multilinguality, multimodality, and agents; and second, by applying that expertise to help enterprises transform AI from proof of concept into proprietary intelligence with systems that perform reliably, deliver measurable impact, and drive lasting results on the P&L.
Project Overview:
As a Generative AI Engineer, you'll be a core member of this pod, building and integrating agentic systems powered by cutting-edge LLM and GenAI technologies. You'll work closely with Tech Leads and Full Stack Engineers to turn AI capabilities into production-ready enterprise solutions.
What Does a Typical Day Look Like?
• Design, develop, and deploy agentic AI systems leveraging LLMs and modern AI frameworks.
• Integrate GenAI models into full-stack applications and internal workflows.
• Collaborate on prompt engineering, model fine-tuning, and evaluation of generative outputs.
• Build reusable components and services for multi-agent orchestration and task automation.
• Optimize AI inference pipelines for scalability, latency, and cost efficiency.
• Participate in architectural discussions, contributing to the pod's technical roadmap.
Required Skills:
• 8 years of software engineering experience, with at least 3 years in AI/ML or GenAI systems in production.
• Hands-on experience with Python for AI/ML model integration.
• Experience with LLM frameworks (LangChain and LlamaIndex are a must).
• Exposure to agentic frameworks (LangGraph and Google ADK are a must).
• Understanding of Git, CI/CD, DevOps, and production-grade GenAI deployment practices.
• Familiarity with Google Cloud Platform (GCP), e.g. Vertex AI, Cloud Run, and GKE.
Engagement Details:
Commitment: On-site (3 days) in Richardson, TX
Type: Contractor, full-time
Duration: Long term
Interview Process: AI Assessment → Technical/Delivery interview → Optional 30-min client call → In-person (TX)
After applying, you will receive an email with a login link. Please use that link to access the portal and complete your profile.
Know amazing talent? Refer them at turing.com/referrals, and earn money from your network.
