Wells Fargo

Lead Gen AI Engineer (contract)

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Lead Gen AI Engineer, a 12-month onsite contract in Charlotte, NC, offering a competitive pay rate. Key requirements include 5+ years of Gen AI experience, strong Python and SQL skills, and hands-on experience with Google ADK and LangChain.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
May 15, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
On-site
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Charlotte, NC
-
🧠 - Skills detailed
#Monitoring #dbt (data build tool) #Airflow #SQL (Structured Query Language) #Databases #Snowflake #Data Engineering #Langchain #Data Access #Data Quality #Data Governance #Compliance #Observability #BigQuery #Security #SQL Queries #AWS (Amazon Web Services) #Cloud #Azure #ETL (Extract, Transform, Load) #Scala #Kafka (Apache Kafka) #Data Pipeline #Batch #AI (Artificial Intelligence) #Python #GCP (Google Cloud Platform) #Logging #Data Ingestion
Role description
Title: Lead Gen AI & Data Engineer
Location: 1525 W W T Harris Blvd, Charlotte, NC
Duration: 12 months
Work Engagement: W-2
Work Schedule: Onsite
Benefits on offer for this contract position: Health Insurance, Life Insurance, 401K and Voluntary Benefits

Summary:
We are seeking a hands-on Lead Gen AI / Agentic AI Engineer to design and build intelligent, enterprise-grade AI agents using Google ADK and LangChain/LangGraph. This role combines advanced LLM application development with strong data engineering expertise, enabling agents to interact seamlessly with multiple Systems of Record (SoRs) through robust data pipelines and integrations. The ideal candidate will own the end-to-end lifecycle, from designing agent workflows and integrating enterprise data to deploying scalable, production-ready solutions with strong observability, governance, and performance optimization.

Responsibilities:
• Design and develop agentic AI systems using Google ADK, LangChain, and LangGraph, including multi-agent orchestration, state management, and tool integration, leveraging enterprise-approved LLMs.
• Integrate agents with enterprise Systems of Record (SoRs) by building reliable data pipelines, APIs, and connectors across structured and unstructured sources.
• Integrate organization-approved foundation models (e.g., Anthropic, Google Gemini) into agentic, task-based workflows.
• Partner with Process Excellence and Ops teams to ideate and implement AI copilots and AI agents for business functions.
• Develop scalable Python-based services for agent workflows, incorporating RAG, tool calling, memory, and structured outputs (a minimal illustrative sketch of this tool-calling pattern appears after this listing).
• Engineer data ingestion and transformation pipelines (batch/streaming) to enable high-quality, governed data access for AI agents.
• Write and optimize complex SQL queries for analytics, feature extraction, and real-time agent decisioning.
• Implement observability, evaluation, and guardrails across both the data and AI layers, ensuring performance, quality, compliance, and cost efficiency.
• Use SQL, Python, and cloud-native tools (GCP, Azure, or AWS) to ensure data quality and lineage.

Qualifications:
• Applicants must be authorized to work for any employer in the U.S. This position is not eligible for visa sponsorship.
• 5+ years in Gen AI, AI data engineering, and agentic AI-focused roles.
• Advanced prompt engineering, context engineering, and Python skills.
• Hands-on experience building agentic AI solutions using Google ADK plus LangChain/LangGraph, including orchestration and tool-usage patterns.
• Strong Python development skills for backend services, workflow engines, and AI pipelines.
• Solid data engineering expertise:
  • Building ETL/ELT pipelines
  • Integrating data from multiple SoRs (APIs, databases, files, streams)
  • Working with data quality, schema evolution, and lineage
• Advanced SQL proficiency (complex joins, window functions, query optimization; see the second sketch after this listing).
• Experience with RAG architectures and integrating LLMs with enterprise data sources (vector stores plus relational systems).
• Production-grade engineering practices: testing, CI/CD, logging, monitoring, and error handling.

Desired Skills:
• Experience with modern data stack tools (e.g., dbt, Airflow/Composer, Kafka/Pub/Sub, BigQuery/Snowflake).
• Familiarity with vector databases and hybrid retrieval strategies.
• Experience deploying solutions on GCP (preferred) or other cloud platforms with scalable architectures.
• Knowledge of data governance, security, and PII handling in AI/data pipelines.
• Exposure to LLMOps frameworks (evaluation, prompt/version management, tracing, cost optimization).
• Experience implementing guardrails and safety controls for enterprise AI agents.
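The responsibilities above center on tool-calling agents built with LangChain/LangGraph over enterprise-approved models and SQL-backed Systems of Record. The sketch below is a minimal, illustrative example of that pattern only, not the employer's actual stack: it assumes the langgraph and langchain-google-genai packages, a Gemini model ID, a GOOGLE_API_KEY in the environment, and a toy SQLite table standing in for a real SoR; Google ADK itself is not shown.

```python
# Minimal sketch: a ReAct-style LangGraph agent that answers questions by calling
# a SQL tool against a toy "System of Record". All names and data are illustrative.
import sqlite3

from langchain_core.tools import tool
from langchain_google_genai import ChatGoogleGenerativeAI  # assumes GOOGLE_API_KEY is set
from langgraph.prebuilt import create_react_agent

# Toy SoR: an in-memory SQLite table standing in for an enterprise database.
conn = sqlite3.connect(":memory:", check_same_thread=False)
conn.execute("CREATE TABLE orders (id INTEGER, customer TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, "acme", 120.0), (2, "acme", 80.5), (3, "globex", 42.0)],
)

@tool
def run_sql(query: str) -> str:
    """Run a read-only SELECT against the orders table and return the rows."""
    if not query.strip().lower().startswith("select"):
        return "Rejected: only SELECT statements are allowed."  # simple guardrail
    return str(conn.execute(query).fetchall())

# An approved LLM, represented here by Gemini via langchain-google-genai.
llm = ChatGoogleGenerativeAI(model="gemini-1.5-pro", temperature=0)

# create_react_agent wires the model and tools into a LangGraph graph that loops
# between model calls and tool calls until the model produces a final answer.
agent = create_react_agent(llm, [run_sql])

if __name__ == "__main__":
    result = agent.invoke(
        {"messages": [("user", "What is the total order amount for customer 'acme'?")]}
    )
    print(result["messages"][-1].content)
```

In a production version of this pattern, the SQLite stub would be replaced by governed connectors to the actual SoRs, and the guardrail, observability, and evaluation layers described in the responsibilities would wrap the agent.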
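For the advanced-SQL qualification (window functions, complex aggregation), here is a second small, illustrative sketch, again run against a toy SQLite table; the schema, column names, and query are assumptions, not anything specified by the posting.

```python
# Minimal sketch: a window-function query of the kind an agent or feature pipeline
# might issue, executed against SQLite purely for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer TEXT, amount REAL, ordered_at TEXT)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?, ?)",
    [
        (1, "acme", 120.0, "2025-01-05"),
        (2, "acme", 80.5, "2025-01-20"),
        (3, "globex", 42.0, "2025-01-12"),
        (4, "globex", 99.0, "2025-02-02"),
    ],
)

# Running total per customer plus each order's rank by amount.
query = """
SELECT
    customer,
    ordered_at,
    amount,
    SUM(amount) OVER (PARTITION BY customer ORDER BY ordered_at) AS running_total,
    RANK() OVER (PARTITION BY customer ORDER BY amount DESC) AS amount_rank
FROM orders
ORDER BY customer, ordered_at;
"""

for row in conn.execute(query):
    print(row)
```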