United Software Group Inc

Agentic AI Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for an Agentic AI Engineer in Dallas, TX, on a 12+ month contract. It requires 5+ years in software development, 3+ years in ML systems, and expertise in LLMs. The role is fully onsite; local candidates only.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
November 1, 2025
🕒 - Duration
More than 6 months
-
🏝️ - Location
On-site
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Dallas, TX
-
🧠 - Skills detailed
#Compliance #AI (Artificial Intelligence) #Model Deployment #AWS (Amazon Web Services) #Data Processing #ETL (Extract, Transform, Load) #Batch #Java #API (Application Programming Interface) #Lambda (AWS Lambda) #C++ #Statistics #DynamoDB #SageMaker #Deployment #ML (Machine Learning) #Cloud #Terraform #Python #S3 (Amazon Simple Storage Service) #Monitoring #Observability #Redshift
Role description
Job Title: Agentic AI Engineer
Location: Dallas, TX (onsite, 5 days per week) – local candidates only
Duration: 12+ months contract
Interview: Teams meeting

Please note: 2-3 client interviews will be required, including a coding interview round. Kindly set expectations upfront.

Job Duties:
In this role, you will be responsible for launching and implementing GenAI agentic solutions aimed at reducing the risk and cost of managing large-scale production environments of varying complexity. You will address production runtime challenges by developing agentic AI solutions that can diagnose, reason, and take action in production environments to improve productivity and resolve production-support issues.

What you'll do:
• Build agentic AI systems: Design and implement tool-calling agents that combine retrieval, structured reasoning, and secure action execution (function calling, change orchestration, policy enforcement) following the MCP protocol. Engineer robust guardrails for safety, compliance, and least-privilege access. (A minimal sketch of this loop appears after the skills list below.)
• Productionize LLMs: Build an evaluation framework for open-source and foundation LLMs; implement retrieval pipelines, prompt synthesis, response validation, and self-correction loops tailored to production operations.
• Integrate with runtime ecosystems: Connect agents to observability, incident management, and deployment systems to enable automated diagnostics, runbook execution, remediation, and post-incident summarization with full traceability.
• Collaborate directly with users: Partner with production engineers and application teams to translate production pain points into agentic AI roadmaps; define objective functions linked to reliability, risk reduction, and cost; and deliver auditable, business-aligned outcomes.
• Safety, reliability, and governance: Build validator models, adversarial prompts, and policy checks into the stack; enforce deterministic fallbacks, circuit breakers, and rollback strategies; instrument continuous evaluations for usefulness, correctness, and risk.
• Scale and performance: Optimize cost and latency via prompt engineering, context management, caching, model routing, and distillation; leverage batching, streaming, and parallel tool calls to meet stringent SLOs under real-world load.
• Build a RAG pipeline: Curate domain knowledge; build a data-quality validation framework; establish feedback loops and a milestone framework to maintain knowledge freshness.
• Raise the bar: Drive design reviews, experiment rigor, and high-quality engineering practices; mentor peers on agent architectures, evaluation methodologies, and safe deployment patterns.

Minimum Skills Required:
• 5+ years of software development in one or more languages (Python, C/C++, Go, Java); strong hands-on experience building and maintaining large-scale Python applications preferred.
• 3+ years designing, architecting, testing, and launching production ML systems, including model deployment/serving, evaluation and monitoring, data processing pipelines, and model fine-tuning workflows.
• Practical experience with Large Language Models (LLMs): API integration, prompt engineering, fine-tuning/adaptation, and building applications using RAG and tool-using agents (vector retrieval, function calling, secure tool execution).
• Understanding of different LLMs, both commercial and open source, and their capabilities (e.g., OpenAI, Gemini, Llama, Qwen, Claude).
• Solid grasp of applied statistics, core ML concepts, algorithms, and data structures to deliver efficient and reliable solutions.
• Strong analytical problem-solving, ownership, and urgency; ability to communicate complex ideas simply and collaborate effectively across global teams with a focus on measurable business impact.
• Preferred: Proficiency building and operating on cloud infrastructure (ideally AWS), including containerized services (ECS/EKS), serverless (Lambda), data services (S3, DynamoDB, Redshift), orchestration (Step Functions), model serving (SageMaker), and infrastructure-as-code (Terraform/CloudFormation).
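
For candidates unfamiliar with the pattern, the "Build agentic AI systems" duty above centers on a simple loop: the model either answers or requests a tool, a policy check gates execution, and the observation is fed back for the next step. The sketch below is a minimal, library-agnostic illustration under stated assumptions; names such as `call_llm`, `TOOLS`, `check_policy`, and the stub tools are hypothetical and not part of this posting or any specific vendor API.

```python
# Minimal, library-agnostic sketch of a tool-calling agent loop.
# All names (call_llm, TOOLS, check_policy, the stub tools) are illustrative
# assumptions, not part of the job description or any vendor SDK.
import json
from typing import Callable

def get_service_status(service: str) -> str:
    """Example diagnostic tool (stubbed)."""
    return json.dumps({"service": service, "status": "degraded", "error_rate": 0.07})

def restart_service(service: str) -> str:
    """Example remediation tool (stubbed); a real system would gate this behind approvals."""
    return json.dumps({"service": service, "action": "restart", "result": "ok"})

# Tool registry: the agent may only call functions listed here (least-privilege).
TOOLS: dict[str, Callable[..., str]] = {
    "get_service_status": get_service_status,
    "restart_service": restart_service,
}

def check_policy(tool_name: str, args: dict) -> bool:
    """Guardrail stub: deny anything outside the allow-list before executing."""
    return tool_name in TOOLS

def call_llm(messages: list[dict]) -> dict:
    """Placeholder for a model call that returns either a final answer
    ({"content": "..."}) or a requested tool call ({"tool": "...", "args": {...}})."""
    raise NotImplementedError("wire up your LLM provider here")

def run_agent(task: str, max_steps: int = 5) -> str:
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = call_llm(messages)
        if "tool" not in reply:                      # model produced a final answer
            return reply.get("content", "")
        name, args = reply["tool"], reply.get("args", {})
        if not check_policy(name, args):             # enforce guardrails, deny by default
            observation = json.dumps({"error": f"tool '{name}' not permitted"})
        else:
            observation = TOOLS[name](**args)        # execute the approved tool
        messages.append({"role": "tool", "name": name, "content": observation})
    return "step budget exhausted; escalating to a human operator"
```

In practice the same loop is where the posting's other concerns attach: retrieval results enter through the message list, validators and circuit breakers wrap the tool calls, and every step is logged for traceability.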