

Enterprise Engineering Inc. (EEI)
Graph Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Graph Data Engineer in New York, NY, on a long-term consulting assignment (6+ months). Key skills include Java Spring Boot, AWS experience, and familiarity with graph technologies. A Bachelor's in Computer Science and 4+ years of software development experience are required.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
February 7, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
New York, NY
-
🧠 - Skills detailed
#Microservices #Pandas #ETL (Extract, Transform, Load) #TigerGraph #GCP (Google Cloud Platform) #Consulting #Python #ML (Machine Learning) #Cloud #Agile #Azure #Datasets #RDF (Resource Description Framework) #Knowledge Graph #Scala #Data Engineering #AWS (Amazon Web Services) #Data Science #Databricks #CRM (Customer Relationship Management) #Data Quality #Neo4J #Spark (Apache Spark) #Computer Science #Java #NumPy
Role description
NYC, NY - 3 days on site
Long Term Consulting Assignment
• Onsite Interview Required
Position Overview
The graph data engineer is responsible for developing and enhancing Jefferies' knowledge graph to power several applications across the firm. You will work in close collaboration with software engineers, data scientists, and product managers to build out business cases using graph technologies. You will also be responsible for cleansing, analyzing, and ingesting data into the graph store, and for creating and optimizing knowledge graph queries. This is a unique opportunity to join our Corporate CRM and Analytics Team, tackling our toughest and most exciting graph challenges across multiple divisions of Jefferies.
Responsibilities
• Design and develop graph data solutions leveraging cutting-edge graph technologies.
• Implement graph algorithms to model, query, and analyze complex relationships in large datasets (see the query sketch after this list).
• Design and manage graph schemas and ontologies to support dynamic and interconnected data systems.
• Develop and maintain ETL pipelines for ingesting data into the knowledge graph and enriching it to serve business requirements (see the ingestion sketch after this list).
• Design and develop data quality checks to ensure data validity and integrity.
• Build data engineering applications within AWS (solid hands-on experience expected).
• Work effectively in an Agile environment.
• Collaborate across the full development life cycle with teams including, but not limited to, Data Engineering, Data Science, Automation Engineering, Infrastructure Engineering, Quality Engineering, and Front-end Engineering.
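For a flavor of the work, here is a minimal sketch of the kind of multi-hop relationship query the role involves, assuming a Neo4j property graph and the official neo4j Python driver; the endpoint, credentials, labels, and names below are all hypothetical:

```python
# Minimal sketch: querying multi-hop relationships in a property graph.
# Connection details, labels, and property names are hypothetical.
from neo4j import GraphDatabase

URI = "neo4j://localhost:7687"   # placeholder endpoint
AUTH = ("neo4j", "password")     # placeholder credentials

def shared_clients(driver, banker_name):
    # Find companies connected to a banker through covering analysts:
    # a two-hop traversal expressed declaratively in Cypher.
    query = (
        "MATCH (b:Banker {name: $name})-[:WORKS_WITH]->(a:Analyst)"
        "-[:COVERS]->(c:Company) "
        "RETURN DISTINCT c.name AS company"
    )
    with driver.session() as session:
        return [record["company"] for record in session.run(query, name=banker_name)]

if __name__ == "__main__":
    with GraphDatabase.driver(URI, auth=AUTH) as driver:
        print(shared_clients(driver, "A. Example"))
```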
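And a companion sketch of an idempotent ingestion step with simple data quality gates, again assuming Neo4j as the target; the file name, columns, and labels are hypothetical:

```python
# Minimal ETL-with-quality-checks sketch, assuming a CSV source and a
# Neo4j target; file name, columns, and labels are hypothetical.
import pandas as pd
from neo4j import GraphDatabase

def load_companies(driver, path):
    df = pd.read_csv(path)

    # Basic data quality checks before ingestion: reject rows missing a
    # key, and drop duplicate identifiers rather than ingesting them twice.
    if df["company_id"].isna().any():
        raise ValueError("found rows with a null company_id")
    df = df.drop_duplicates(subset="company_id")

    # MERGE is an idempotent upsert, so re-running the pipeline is safe.
    query = (
        "UNWIND $rows AS row "
        "MERGE (c:Company {id: row.company_id}) "
        "SET c.name = row.name, c.sector = row.sector"
    )
    with driver.session() as session:
        session.run(query, rows=df.to_dict("records"))
```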
Basic Requirements
• Bachelor's degree or higher in Computer Science or similar academic discipline
• 4+ years of professional software development experience
• Excellent problem solving and critical thinking skills
• Strong understanding of algorithms and data structures, and of when to apply them
• Proficiency in Java and Spring Boot; Python and Scala are a plus
• Experience building APIs and microservices
• Familiarity with ETL and relevant data quality checks
• Experience working with cloud environment setup (AWS, Azure, GCP)
• Ability to learn graph technologies and work effectively in a graph environment
Preferred Skills
• Experience with graph technologies, e.g. RDF, RDFS, SPARQL, SHACL, Cypher; hands-on experience with triple stores (Neptune, AllegroGraph, GraphDB, etc.) or property graph stores (Neo4j, TigerGraph, etc.) is a strong plus (a short SPARQL sketch follows this list).
• Experience in building ontologies, knowledge graphs, or semantic systems for domain-specific applications.
• Familiarity with graph and machine learning algorithms (PageRank, Connected Components, Cosine Similarity) and with libraries such as Pandas, NumPy, and scikit-learn (a similarity sketch follows this list)
• Familiarity with the full-stack development life cycle, including integration and UI development; experience with Spark and Databricks.
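To illustrate the RDF side of the preferred stack, here is a minimal SPARQL sketch over an in-memory graph using rdflib; the namespace and triples are hypothetical:

```python
# Minimal SPARQL sketch against an in-memory RDF graph using rdflib;
# the ontology namespace and data are hypothetical.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/firm#")

g = Graph()
g.add((EX.acme, RDF.type, EX.Company))
g.add((EX.acme, EX.sector, Literal("Technology")))

# Declarative pattern matching over triples, analogous to Cypher
# over a property graph.
results = g.query(
    """
    PREFIX ex: <http://example.org/firm#>
    SELECT ?company ?sector
    WHERE { ?company a ex:Company ; ex:sector ?sector . }
    """
)
for company, sector in results:
    print(company, sector)
```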
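And a minimal similarity sketch with NumPy and scikit-learn, of the kind used to relate entities in a knowledge graph; the vectors below are toy data:

```python
# Minimal sketch of a similarity computation of the kind used to relate
# entities in a knowledge graph; the feature vectors are hypothetical.
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

# Toy embeddings for three entities (rows), e.g. companies described
# by numeric features or learned graph embeddings.
vectors = np.array([
    [1.0, 0.0, 2.0],
    [0.9, 0.1, 1.8],
    [0.0, 5.0, 0.2],
])

# Pairwise cosine similarity: entries near 1.0 suggest closely related
# entities, a common signal for recommendation or entity resolution.
print(cosine_similarity(vectors))
```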