Anblicks

Graph Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Graph Engineer with 8+ years in Data Engineering or Software Development and 2+ years in Knowledge Graph development. It requires expertise in graph databases, Python, data pipelines, and cloud platforms. Contract length and pay rate are unspecified.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
February 13, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Dallas, TX
-
🧠 - Skills detailed
#GIT #Scala #Neo4J #Python #Data Engineering #TigerGraph #Spark (Apache Spark) #MDM (Master Data Management) #Data Modeling #DevOps #Data Pipeline #dbt (data build tool) #Apache Spark #Knowledge Graph #RDF (Resource Description Framework) #Cloud #Version Control #AWS (Amazon Web Services) #Airflow #Programming #ETL (Extract, Transform, Load) #Databases #Data Manipulation #Graph Databases #Data Management #Data Transformations #Kafka (Apache Kafka) #Azure
Role description
Experience
• 8+ years of hands-on technical experience in Data Engineering, Software Development, or Analytics.
• 2+ years of dedicated hands-on experience in Knowledge Graph development and relationship mapping, preferably with customer or client data.
Technical Skills
• Knowledge Graph/Ontology: Deep practical expertise in graph data modeling, ontology development, and semantic modeling principles (RDF, RDFS, OWL, SHACL).
• Graph Databases: Proven hands-on experience with at least one major graph database technology such as Neo4j, AWS Neptune, TigerGraph, or JanusGraph, and expertise in its native query language (Cypher, SPARQL, or Gremlin); sketched below.
• Graph Analytics: Strong experience with graph algorithms, including community detection, centrality measures, path finding, pattern matching, and relationship scoring (sketched below).
• Programming: Strong proficiency in Python for data manipulation, graph algorithm implementation, and data transformations.
• Data Engineering: Solid experience building scalable data pipelines with modern tools such as Apache Spark, Kafka, Airflow, or dbt.
• Customer Data: Understanding of customer master data management and entity resolution techniques (sketched below).
• Cloud & DevOps: Experience with version control (Git) and familiarity with CI/CD processes and major cloud platforms (AWS, Azure).
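To illustrate the graph database and relationship-mapping skills listed above, the sketch below merges customer and account nodes into a Neo4j graph using Cypher and the official Python driver (neo4j 5.x). The connection URI, credentials, node labels, and property names are illustrative assumptions, not details from the posting.

```python
# A minimal sketch of customer relationship mapping with the Neo4j Python driver.
# URI, credentials, labels, and properties are hypothetical placeholders.
from neo4j import GraphDatabase

URI = "bolt://localhost:7687"   # hypothetical local instance
AUTH = ("neo4j", "password")    # placeholder credentials

MERGE_RELATIONSHIP = """
MERGE (c:Customer {customer_id: $customer_id})
MERGE (a:Account {account_id: $account_id})
MERGE (c)-[r:OWNS]->(a)
SET r.since = $since
"""

def load_customer_account(tx, customer_id, account_id, since):
    # MERGE keeps the load idempotent: re-running the pipeline does not
    # create duplicate nodes or relationships.
    tx.run(MERGE_RELATIONSHIP, customer_id=customer_id,
           account_id=account_id, since=since)

def main():
    with GraphDatabase.driver(URI, auth=AUTH) as driver:
        with driver.session() as session:
            session.execute_write(load_customer_account, "C-001", "A-100", "2024-01-15")

if __name__ == "__main__":
    main()
```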
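For the graph-analytics requirement, the next sketch runs centrality, community detection, and path finding over a toy customer-interaction graph. It uses networkx purely as an illustration; the posting does not name an analytics library, and production work would more likely run the graph database's own algorithm library.

```python
# A minimal sketch of common graph analytics over made-up customer data.
import networkx as nx
from networkx.algorithms import community

# Toy customer-interaction graph; nodes and edges are fabricated examples.
G = nx.Graph()
G.add_edges_from([
    ("cust_a", "cust_b"),
    ("cust_b", "cust_c"),
    ("cust_c", "cust_a"),
    ("cust_d", "cust_e"),
])

# Centrality: which customers sit at the heart of the network.
degree_centrality = nx.degree_centrality(G)

# Community detection: groups of densely connected customers.
communities = community.greedy_modularity_communities(G)

# Path finding between two customers.
path = nx.shortest_path(G, "cust_a", "cust_c")

print(degree_centrality)
print([sorted(c) for c in communities])
print(path)
```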
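The customer-data bullet mentions entity resolution; below is a deliberately simplified sketch of pairwise name matching using only the standard library. Real master data management pipelines use blocking, multiple attributes, and tuned scoring rather than a single string-similarity threshold; the records, field names, and threshold here are assumptions for illustration.

```python
# A minimal entity-resolution sketch: flag customer records whose names
# are similar enough to be likely duplicates.
from difflib import SequenceMatcher

def name_similarity(a, b):
    # Normalized string similarity in [0, 1] after basic cleanup.
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

def resolve(records, threshold=0.85):
    # Compare every pair of records and collect likely duplicate id pairs.
    matches = []
    for i in range(len(records)):
        for j in range(i + 1, len(records)):
            if name_similarity(records[i]["name"], records[j]["name"]) >= threshold:
                matches.append((records[i]["id"], records[j]["id"]))
    return matches

records = [
    {"id": "C-001", "name": "Acme Corp"},
    {"id": "C-002", "name": "Acme Corp."},
    {"id": "C-003", "name": "Globex Inc."},
]
print(resolve(records))  # flags C-001 / C-002 as a likely match
```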