

Talent Groups
Knowledge Graph Engineer/Architect
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Knowledge Graph Engineer/Architect, offered as a contract of "contract length" at a pay rate of "pay rate". Required skills include 3–10+ years of experience in data engineering, expertise in graph databases, and proficiency in SPARQL or Cypher.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
640
-
🗓️ - Date
December 11, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
United States
-
🧠 - Skills detailed
#Metadata #Visualization #RDF (Resource Description Framework) #Azure Data Factory #Data Science #AI (Artificial Intelligence) #Python #Azure Cosmos DB #Data Ingestion #Knowledge Graph #Azure #Datasets #GraphQL #Computer Science #API (Application Programming Interface) #Semantic Models #Databricks #Databases #ML (Machine Learning) #Schema Design #Data Engineering #ADF (Azure Data Factory) #Data Enrichment #Neo4J #Langchain #AWS (Amazon Web Services) #Kafka (Apache Kafka) #Data Modeling #Java #ETL (Extract, Transform, Load) #TigerGraph #Airflow #HBase #Data Quality #Graph Databases
Role description
Role Overview:
We are looking for an experienced Knowledge Graph Engineer/Architect to design, build, and scale our enterprise knowledge graph and semantic data platforms. The ideal candidate has strong experience in knowledge representation, graph databases, ontology modeling, and integrating structured/unstructured data to power search, reasoning, and intelligent applications.
Key Responsibilities
• Design and develop knowledge graphs, ontologies, and taxonomies aligned with business domains.
• Build semantic models using RDF, OWL, SHACL, or property graph approaches.
• Implement and maintain graph databases (e.g., Neo4j, AWS Neptune, Azure Cosmos DB Gremlin API, TigerGraph).
• Ingest heterogeneous datasets and convert them into graph structures using ETL pipelines.
• Work closely with product, data, and engineering teams to understand requirements and map real-world entities into graph models.
• Develop and optimize SPARQL, Cypher, or Gremlin queries for graph analytics.
• Integrate the knowledge graph with downstream systems (search, RAG pipelines, analytics, APIs).
• Ensure data quality, entity resolution, schema alignment, and consistency across sources.
• Implement reasoning, inference, metadata enrichment, and graph-based recommendations.
• Conduct POCs, evaluate graph technologies, and define best practices for knowledge graph architecture.
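As an illustrative sketch of the ingestion responsibility above, the following pure-Python example converts tabular records into property-graph nodes and edges of the kind that could then be loaded into Neo4j or another graph store. The record fields (`employee`, `company`) and the labels (`Person`, `Organization`, `WORKS_FOR`) are hypothetical, chosen only for the example:

```python
def records_to_graph(records):
    """Convert tabular rows into property-graph nodes and edges.

    Returns a dict of nodes keyed by a derived ID, and a list of
    (source_id, relationship, target_id) edge tuples. The entity
    types here are illustrative, not prescribed by any real schema.
    """
    nodes = {}   # node_id -> {"label": ..., "props": {...}}
    edges = []   # (src_id, relationship_type, dst_id)
    for rec in records:
        emp_id = f"person:{rec['employee'].lower()}"
        org_id = f"org:{rec['company'].lower()}"
        # Upserting by ID deduplicates entities that appear in multiple rows.
        nodes[emp_id] = {"label": "Person", "props": {"name": rec["employee"]}}
        nodes[org_id] = {"label": "Organization", "props": {"name": rec["company"]}}
        edges.append((emp_id, "WORKS_FOR", org_id))
    return nodes, edges

rows = [
    {"employee": "Ada", "company": "Acme"},
    {"employee": "Bob", "company": "Acme"},
]
nodes, edges = records_to_graph(rows)
```

In a real pipeline the same shape of output would typically be written in batches via a driver (e.g., Cypher `MERGE` statements for Neo4j), so that re-running the ETL job stays idempotent.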
Required Skills & Experience
• 3–10+ years of experience in data engineering, semantic technologies, or knowledge graphs.
• Strong understanding of graph theory, ontologies, and linked data principles.
• Hands-on experience with at least one major graph database:
• Neo4j
• AWS Neptune
• Azure Cosmos DB (Gremlin)
• TigerGraph
• Expertise in SPARQL, Cypher, or Gremlin.
• Experience with Python or Java for graph data ingestion and pipeline development.
• Knowledge of ETL/ELT, data modeling, schema design, and API integration.
• Familiarity with modern AI/ML workflows, especially RAG (Retrieval-Augmented Generation) or LLM-driven applications, is a plus.
• Understanding of semantic web standards (RDF/OWL), entity resolution, and graph embeddings.
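Entity resolution, listed above, can be sketched minimally as clustering raw entity strings by a normalized canonical key. This toy example uses only naive string normalization (real systems add blocking, fuzzy matching, and attribute comparison); the suffix list and sample names are assumptions for illustration:

```python
import re

def normalize(name):
    """Crude canonical key: lowercase, strip punctuation and common legal suffixes."""
    key = re.sub(r"[^a-z0-9 ]", "", name.lower())
    key = re.sub(r"\b(inc|llc|ltd|corp)\b", "", key)
    return " ".join(key.split())

def resolve(entities):
    """Group raw entity strings whose normalized forms collide."""
    clusters = {}
    for raw in entities:
        clusters.setdefault(normalize(raw), []).append(raw)
    return clusters

clusters = resolve(["Acme Inc.", "ACME", "acme, inc", "Globex Corp"])
# The three "Acme" variants collapse into one cluster under the "acme" key.
```

The resolved clusters then map onto a single graph node per real-world entity, which is what keeps schema alignment and cross-source consistency tractable.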
Nice-to-Have Skills
• Experience with vector databases and hybrid search.
• Exposure to LLM orchestration, RAG frameworks, or knowledge-grounded AI systems.
• Background in enterprise domains such as retail, healthcare, insurance, or fintech.
• Familiarity with tools like GraphQL, LangChain, Kafka, Airflow, Databricks, or Azure Data Factory.
• Experience with knowledge graph visualization tools (Bloom, GraphXR).
Soft Skills
• Strong analytical and problem-solving ability.
• Ability to translate business needs into data models.
• Excellent communication and cross-functional collaboration.
• Ownership mindset with ability to work in ambiguous environments.
Education
• Bachelor’s or Master’s degree in Computer Science, Data Science, Information Systems, or related field.
• Certifications in semantic technologies or graph databases are a plus.





