STAFFXPERT LLC

Graph Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Graph Data Engineer in New York, NY (Hybrid – 3 Days Onsite) with a contract length of "Unknown" and a pay rate of "Unknown." Key skills include Java, Spring Boot, ETL processes, and experience with graph databases.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
March 13, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
New York, United States
-
🧠 - Skills detailed
#AWS (Amazon Web Services) #Agile #Microservices #Data Ingestion #Data Accuracy #Spring Boot #Scala #ETL (Extract, Transform, Load) #HBase #Databricks #Data Engineering #Java #Computer Science #Knowledge Graph #RDF (Resource Description Framework) #TigerGraph #Spark (Apache Spark) #Pandas #Data Quality #Python #Data Science #Libraries #GCP (Google Cloud Platform) #Azure #Big Data #Datasets #Cloud #NumPy #Graph Databases #Databases #ML (Machine Learning) #Neo4j
Role description
Graph Data Engineer
New York, NY (Hybrid – 3 Days Onsite)

Job Summary
STAFFXPERT LLC is seeking a Graph Data Engineer on behalf of our client in New York, NY. This role involves designing, developing, and enhancing a large-scale knowledge graph platform that powers advanced data analytics and business applications. The ideal candidate will collaborate with software engineers, data scientists, and product managers to build scalable graph-based solutions, manage data ingestion pipelines, and implement graph algorithms to analyze complex relationships within enterprise datasets.

Key Responsibilities
• Design and develop graph-based data solutions using modern graph technologies.
• Implement graph algorithms to model, query, and analyze complex relationships in large datasets.
• Design and maintain graph schemas and ontologies to support dynamic and interconnected data systems.
• Develop and manage ETL pipelines for ingesting and enriching data within a knowledge graph environment.
• Implement data quality checks to ensure data accuracy, integrity, and reliability.
• Develop and maintain APIs and microservices that support graph-based applications.
• Build and support data engineering solutions in cloud environments, particularly AWS.
• Collaborate with cross-functional teams, including data engineers, data scientists, QA engineers, infrastructure engineers, and front-end developers.
• Participate in Agile development processes and contribute to the full software development lifecycle.

Required Qualifications
• Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field.
• 4+ years of professional software development experience.
• Strong understanding of algorithms, data structures, and software engineering best practices.
• Proficiency in Java with Spring Boot.
• Experience developing RESTful APIs and microservices.
• Hands-on experience with ETL processes and data quality validation techniques.
• Experience working with cloud platforms such as AWS, Azure, or GCP.
• Ability to work with, or quickly learn, graph technologies and graph-based data environments.
• Strong problem-solving, analytical, and collaboration skills.

Preferred Qualifications
• Experience with graph technologies such as RDF, RDFS, SPARQL, SHACL, or Cypher.
• Experience with graph databases including Neo4j, Neptune, TigerGraph, GraphDB, or AllegroGraph.
• Experience building knowledge graphs, ontologies, or semantic data systems.
• Familiarity with graph algorithms and machine learning techniques such as PageRank, Connected Components, or Cosine Similarity.
• Experience with Python data libraries such as Pandas, NumPy, and scikit-learn.
• Exposure to big data tools such as Spark or Databricks.
• Understanding of the full software development lifecycle, including integration and UI development.