

Synergy Technologies
Need Senior AI/GenAI/ML Engineer with Python, Platform & Data Engineering, RAG, Infrastructure, DevOps Exp. :: Min. 12+ Yrs Exp. Required :: Addison, TX / Charlotte, NC (Onsite)
Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior AI/GenAI/ML Engineer with 12+ years of experience, based in Addison, TX or Charlotte, NC (onsite). Key skills include expert Python, data engineering, RAG expertise, and DevOps proficiency.
Country: United States
Currency: $ USD
Day rate: Unknown
Date: March 11, 2026
Duration: Unknown
Location: On-site
Contract: Unknown
Security: Unknown
Location detailed: Addison, TX
Skills detailed: #Python #Data Engineering #Database Management #Libraries #Scala #DevOps #Jenkins #Azure #Deployment #AI (Artificial Intelligence) #Databases #ML (Machine Learning) #Cloud #AWS (Amazon Web Services) #Data Ingestion #Compliance #Ansible
Role description
Job Title: Senior AI/GenAI/ML Engineer
Location: Addison, TX / Charlotte, NC (Onsite)
NOTE: Minimum Exp: 12+ Years required
Role Overview:
We are looking for an AI Platform Engineer: a builder who can architect the "factory" where AI is made. Our goal is to build an internal, on-premises AI ecosystem that mirrors the capabilities of AWS or Azure. You will be responsible for creating a horizontal platform used by various lines of business to deploy AI projects simultaneously.
Key Responsibilities:
• Platform Architecture: Design and develop a "Model-as-a-Service" platform that allows non-experts to build AI solutions from drag-and-drop components.
• RAG-as-a-Service: Build and optimize end-to-end Retrieval-Augmented Generation (RAG) pipelines, including sophisticated chunking strategies and vector database management.
• Tooling & Libraries: Develop and maintain MCP (Model Context Protocol) libraries, clients, and servers to connect various data sources to the AI engine.
• Infrastructure Management: Help manage and optimize one of the largest on-premises GPU farms in the U.S. banking sector (500+ Nvidia nodes).
• Agentic AI: Build a repository for agentic AI where users can select existing agents or build custom ones for specialized tasks.
• CI/CD Integration: Integrate AI deployment pipelines with enterprise-level CI/CD tools such as Jenkins and Ansible.
• Compliance & Guardrails: Implement corporate-level guardrails and work within Model Risk Management (MRM) frameworks to ensure all AI deployments are secure and compliant.
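Purely as an illustrative sketch of the RAG building blocks named above (overlapping chunking and vector retrieval), the following minimal Python captures the idea. All names here are hypothetical, and `embed` is a toy bag-of-characters stand-in for a real embedding model; a production pipeline would use a proper embedding service and a vector database rather than an in-memory list.

```python
import math

def chunk_text(text, chunk_size=100, overlap=20):
    """Split text into overlapping character chunks (a simple chunking strategy)."""
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, max(len(text) - overlap, 1), step)]

def embed(text):
    """Toy bag-of-characters vector; real systems call an embedding model."""
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, top_k=2):
    """Rank stored chunks by similarity to the query embedding (the 'R' in RAG)."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:top_k]
```

The retrieved chunks would then be injected into the model prompt as context, which is the step that turns this into retrieval-augmented generation.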
Required Technical Skills:
• Expert Python: Deep, hands-on knowledge is mandatory.
• Data Engineering: Extensive experience with massive data ingestion and processing.
• RAG Expertise: Deep understanding of vector databases, inference, and advanced chunking strategies.
• Platform Engineering: Proven experience building tools and platforms that other developers or business units use.
• Infrastructure Knowledge: Experience replicating cloud capabilities (AWS/Azure) in a strictly on-premises environment.
• DevOps: Familiarity with Jenkins, Ansible, and automated deployment pipelines.
Experience & Qualifications:
• Seniority: This is a senior-level role; we are looking for someone with a proven track record of building production-grade platforms (12-15+ years).
• Industry Knowledge: You must stay current with the "latest and greatest" in AI (e.g., RAG-less inference, agentic frameworks).
• Problem Solver: Must be able to take a use case from a business unit and translate it into a scalable platform service.
• Experience with Scale: Experience with large-scale GPU farms and high-volume data environments is highly preferred.






