Saransh Inc

Sr. GenAI Engineer with Graph DB, Vector DB, Neo4j

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Sr. GenAI Engineer with 10+ years of experience, specializing in Graph DB, Vector DB, Neo4j, and RAG. It’s a contract position in Dallas, TX, requiring onsite work 3 days a week, with a focus on financial data infrastructure.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date
November 21, 2025
🕒 - Duration
Unknown
🏝️ - Location
On-site
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
Dallas, TX
🧠 - Skills detailed
#ML (Machine Learning) #Deployment #Scala #Neo4J #Data Governance #Knowledge Graph #AI (Artificial Intelligence) #Data Pipeline #Cloud #Compliance
Role description
Role: Sr. GenAI Engineer with Graph DB, Vector DB, Neo4j & RAG
Location: Dallas, TX (3 days a week onsite required)
Job Type: Contract
Experience Required: 10+ years
Required Skills
Strong expertise in knowledge graphs, Graph DB, Vector DB, Neo4j, RAG, and similar technologies.
About The Role
• A finance client is seeking a Sr. GenAI Engineer (10+ years of experience) with strong expertise in knowledge graphs, Graph DB, Vector DB, Neo4j, RAG, and similar technologies.
• This engineer will design and implement data infrastructure that enables efficient fine-tuning and deployment of large language models (LLMs) on client servers for low-latency inference.
• The role demands a hands-on technologist who can architect, build, and optimize data systems serving enterprise-grade AI use cases.
Key Responsibilities
• Design and implement GraphDB and VectorDB solutions to store, query, and retrieve structured and unstructured financial data.
• Build knowledge graph pipelines integrating multiple data sources to support LLM fine-tuning and retrieval-augmented generation (RAG) workflows.
• Set up scalable data pipelines for model training, embedding generation, and data preprocessing.
• Collaborate with AI researchers and ML engineers to prepare data and infrastructure for fine-tuning open-source or proprietary LLMs.
• Deploy and optimize model hosting for fast inference on on-prem or cloud GPU servers.
• Ensure data governance, lineage, and compliance with internal and regulatory standards.