

Infomatics Corp
Forward Deployed AI Engineer – System Test Automation
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Forward Deployed AI Engineer – System Test Automation, offering a contract length of "unknown" and a pay rate of "unknown." Key skills include Python programming, AI deployment, and automation experience. Preferred qualifications involve networking fundamentals and familiarity with Linux, CI/CD, Docker, and Kubernetes.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
March 24, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Bethesda, MD
-
🧠 - Skills detailed
#AI (Artificial Intelligence) #Documentation #IP (Internet Protocol) #Leadership #Docker #Python #Programming #Linux #Automation #Deployment #Kubernetes
Role description
We are seeking a Forward Deployed AI Engineer (FDE) to accelerate the real-world adoption of AI within our End-to-End System Test Automation. This role focuses on taking AI tools and prototypes and making them work reliably inside complex engineering environments. You will embed closely with system test engineers, automation framework/test owners, and development teams to adapt, harden, and operationalize AI-driven workflows. This is a hands-on engineering role for individuals who thrive in ambiguity, enjoy working close to real users, and can turn AI potential into measurable productivity gains.
Key Responsibilities:
Forward Deployment of AI into Test Automation
• Turn AI prototypes into production-ready, repeatable solutions
• Embed with system test and automation teams to:
   • Understand real testing workflows, constraints, and failure modes
   • Identify where AI can safely and effectively reduce manual effort
• Adapt AI workflows to work with:
   • Existing automation frameworks
   • Real test data, schemas, and configurations
Agentic AI Implementation & Tuning
• Implement and customize agentic AI workflows that:
   • Interpret requirements, schemas, or models
   • Assist in generating structured test assets
• Tune AI behavior based on:
   • Real test outcomes
   • Failure analysis and feedback
• Debug and resolve AI issues in live engineering environments
AI Tooling & Platform Feedback
• Work with AI platforms (e.g., Qodo or similar) to:
   • Extend functionality where needed
   • Configure prompts, workflows, and validation layers
• Evaluate emerging AI tools and frameworks in real system test contexts
• Feed practical insights back to platform and leadership teams: what works, what fails, and what should scale
Enablement, Documentation & Guardrails
• Document AI usage patterns and best practices
• Help define guardrails for:
   • Safety-critical configurations
   • Invalid or destructive AI-generated outputs
• Enable test engineers to adopt AI confidently and responsibly
Required Qualifications
• 2+ years of hands-on experience in software engineering or applied AI, with clear ownership of deploying solutions into real engineering or production environments
• Demonstrated experience taking an AI or automation-based solution from prototype to real-world usage, including adapting it to real constraints, failures, and evolving requirements
• Proven ability to own and troubleshoot AI behavior, tuning prompts/instructions, adding validation or guardrails, and diagnosing incorrect, unsafe, or low-quality outputs
• Experience working closely with engineers or end users to understand real workflows and iterating rapidly based on live feedback and outcomes
• Strong hands-on programming skills (Python required), with experience building orchestration, integration, or control logic across multiple systems
Preferred Qualifications
• Networking fundamentals (TCP/IP, MPLS, gNMI, YANG)
• Linux, CI/CD, Docker, Kubernetes
