

Data Scientist
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Scientist specializing in Generative AI and Knowledge Graphs, offering a remote, full-time contract for over 6 months. Requires 5–8 years of data science experience, proficiency in GraphDBs, and strong Python skills.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
-
🗓️ - Date discovered
August 1, 2025
🕒 - Project duration
More than 6 months
-
🏝️ - Location type
Remote
-
📄 - Contract type
W2 Contractor
-
🔒 - Security clearance
Unknown
-
📍 - Location detailed
Seattle, WA
-
🧠 - Skills detailed
#ML (Machine Learning) #Neo4J #Databases #Langchain #NLP (Natural Language Processing) #Data Pipeline #REST (Representational State Transfer) #Graph Databases #AI (Artificial Intelligence) #Model Deployment #Deployment #Data Processing #Data Modeling #Amazon Neptune #JSON (JavaScript Object Notation) #Data Science #Transformers #Knowledge Graph #Python #Libraries #RDF (Resource Description Framework) #TigerGraph #NumPy #Pandas #REST API #Data Wrangling #ETL (Extract, Transform, Load) #Computer Science #HBase #Cloud
Role description
Job Title: Senior Data Scientist – GenAI & Knowledge Graphs
Location: Remote
Department: AI & Data Science
Employment Type: Full-time contract; all visa types, including H-1B, are eligible
Job Summary
We are looking for a Senior Data Scientist with hands-on expertise in building Generative AI applications using Knowledge Graphs, Graph Databases, and multi-agent systems.
The ideal candidate will have strong experience in LLM-driven development, agentic AI workflows, and semantic data modeling using OWL and RDF.
Key Responsibilities
• Build and maintain graph-based data pipelines using technologies like Neo4j, Amazon Neptune, or Stardog.
• Design and implement knowledge graphs and ontologies using OWL, Protégé, TopBraid, or similar tools.
• Integrate knowledge graphs with GenAI pipelines, improving context grounding and retrieval for LLM-based applications.
• Develop and orchestrate multi-agent systems using LangGraph, CrewAI, or AutoGen, including agent-to-agent (A2A) communication, memory modules, and reasoning chains.
• Leverage MCP servers and agent runtime engines to deploy agent-based GenAI applications for real-world scenarios such as customer support, content synthesis, and document analysis.
• Work on RAG architectures involving embedding models, vector stores (e.g., FAISS, Pinecone), and structured semantic layers.
• Collaborate with engineers and junior data scientists on project delivery and model deployment.
• Write clean, reusable code in Python using ML/LLM frameworks such as LangChain, HuggingFace, and OpenAI SDKs.
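To give candidates a feel for the context-grounding work described above, here is a minimal, self-contained Python sketch of retrieving facts from a toy in-memory knowledge graph and injecting them into an LLM prompt. The triples, entity names, and prompt template are made-up illustrations, not part of any framework named in this posting; a production system would query Neo4j or Neptune instead of a Python list.

```python
# Toy RAG-style grounding: look up graph facts about an entity and
# prepend them to a question before it is sent to an LLM.
from typing import List, Tuple

Triple = Tuple[str, str, str]  # (subject, predicate, object)

# Hypothetical example triples standing in for a real graph database.
GRAPH: List[Triple] = [
    ("AcmeBot", "handles", "customer support tickets"),
    ("AcmeBot", "built_with", "LangGraph"),
    ("LangGraph", "is_a", "multi-agent orchestration framework"),
]

def retrieve_facts(entity: str, graph: List[Triple]) -> List[str]:
    """Return every fact mentioning the entity as a short sentence."""
    return [f"{s} {p.replace('_', ' ')} {o}"
            for s, p, o in graph if entity in (s, o)]

def build_grounded_prompt(question: str, entity: str) -> str:
    """Prepend retrieved graph facts to the question as grounding context."""
    context = "\n".join(f"- {fact}" for fact in retrieve_facts(entity, GRAPH))
    return f"Context:\n{context}\n\nQuestion: {question}"

prompt = build_grounded_prompt("What framework powers AcmeBot?", "AcmeBot")
```

In the real role, the retrieval step would be a Cypher or SPARQL query and the prompt assembly would typically live inside a LangChain or LangGraph node, but the grounding pattern is the same.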
Required Skills & Experience
• 5–8 years of experience in data science, with 2+ years in LLM/GenAI development.
• Hands-on experience with GraphDBs (Neo4j, Neptune, TigerGraph) and SPARQL or Cypher.
• Proficiency in ontology modeling using OWL/RDF and familiarity with semantic reasoning tools.
• Strong experience in building agentic AI applications, including use of LangGraph, AutoGen, or CrewAI.
• Understanding of multi-agent communication protocols (A2A), agent memory, and orchestration layers.
• Solid understanding of data wrangling, NLP, and embedding models (e.g., sentence-transformers, OpenAI embeddings).
• Python proficiency and experience with REST APIs, data processing libraries (pandas, NumPy), and JSON-LD.
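For candidates unfamiliar with JSON-LD, a minimal example of the kind of semantically annotated record it describes can be built with nothing but the standard library. The schema.org vocabulary is real; the specific person record below is an invented illustration.

```python
# Minimal JSON-LD document: a plain JSON object whose "@context" and
# "@type" keys give its fields shared, machine-readable semantics.
import json

person_jsonld = {
    "@context": "https://schema.org",
    "@type": "Person",
    "name": "Jane Doe",
    "jobTitle": "Senior Data Scientist",
    "knowsAbout": ["Knowledge Graphs", "Generative AI"],
}

# Round-trip through the standard json module; JSON-LD is ordinary JSON.
serialized = json.dumps(person_jsonld, indent=2)
parsed = json.loads(serialized)
```

Records like this are what typically bridge graph databases and GenAI pipelines: the "@context" maps each key to an ontology term, so the same data can be loaded into an RDF store or fed to an LLM as structured context.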
Preferred Qualifications
• Master’s degree in Data Science, Computer Science, AI/ML, or a related field.
• Knowledge of cloud-native deployment is a plus (though not mandatory).
• Open-source contributions, blog posts, or internal project showcases in the GenAI/Knowledge Graph space.