

Santcore Technologies
Graph Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Graph Data Engineer in Rockville, MD or Tysons, VA (Hybrid – 3 Days Onsite) with a contract length of unspecified duration. Pay rate is not provided. Requires 3–5 years in Scala, Python, or Java, and 3+ years in data analysis. Proficiency in SQL and experience with AWS and Apache Spark are essential. Preferred qualifications include Neo4j experience and enterprise financial industry exposure.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
504
-
🗓️ - Date
February 21, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Rockville, MD
-
🧠 - Skills detailed
#Databases #Datasets #Scala #Data Analysis #Compliance #Spark (Apache Spark) #Apache Spark #Debugging #Agile #Presto #Scripting #Python #BI (Business Intelligence) #Data Architecture #SQL (Structured Query Language) #Data Processing #AI (Artificial Intelligence) #Big Data #Data Pipeline #Java #Neo4J #ML (Machine Learning) #AWS (Amazon Web Services) #Graph Databases #Data Engineering
Role description
Position: Graph Data Engineer
📍 Rockville, MD or Tysons, VA (Hybrid – 3 Days Onsite)
📝 Interview Process: 2 Rounds (Final Round Includes Coding Exercise)
Position Overview
Our client is seeking a highly skilled Graph Data Engineer to join their growing data engineering team. This role focuses on large-scale data analytics, graph architecture design, and advanced data processing within enterprise financial environments.
The ideal candidate will bring deep expertise in graph databases (Neo4j preferred), advanced SQL, AWS, and backend development using Python, Java, or Scala, along with strong experience in big data ecosystems.
Key Responsibilities
• Design, build, and support graph data architecture
• Perform large-scale data analytics and processing using Spark
• Profile and onboard new data sources
• Identify patterns, relationships, and insights within complex datasets
• Develop and optimize data pipelines in AWS environments
• Collaborate with upstream/downstream engineering teams to enhance value delivery
• Participate in Agile ceremonies and contribute to team initiatives
• Ensure high-quality, secure, and scalable solutions within financial domain environments
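To give candidates a concrete feel for the "identify patterns, relationships, and insights" responsibility above, here is a minimal, hypothetical sketch in Python. The account names and data model are invented for illustration; in the actual role this traversal would run against Neo4j (or Spark on AWS) rather than in-memory dictionaries.

```python
from collections import defaultdict, deque

# Hypothetical toy data: (account, counterparty) transaction pairs.
# In production these edges would be produced by Spark pipelines and
# stored in a graph database such as Neo4j.
edges = [
    ("acct_A", "acct_B"),
    ("acct_B", "acct_C"),
    ("acct_D", "acct_E"),
]

# Build an undirected adjacency list.
graph = defaultdict(set)
for a, b in edges:
    graph[a].add(b)
    graph[b].add(a)

def linked_accounts(start: str) -> set[str]:
    """Return every account reachable from `start` via breadth-first search."""
    seen = {start}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for neighbor in graph[node]:
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen - {start}

print(sorted(linked_accounts("acct_A")))  # acct_B and acct_C, but not acct_D/E
```

This is the shape of work implied by the responsibilities list: connected-component and reachability questions over entity graphs, which in a financial setting often surface indirectly linked counterparties.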
Required Qualifications
• 3–5 years of development experience in Scala, Python, or Java (OOP + scripting)
• 3+ years in data analysis within big data, graph, BI, or analytics environments
• 2+ years hands-on experience with AWS
• Strong, near-expert level proficiency in SQL
• Experience with large-scale processing engines such as Apache Spark, Presto, or equivalent distributed processing frameworks
• Strong analytical mindset with curiosity for exploring data and business outcomes
• Knowledge of AI-assisted coding, testing, and debugging tools
Preferred Qualifications
• 1+ year of experience with Neo4j (Highly Preferred)
• Enterprise Financial Industry experience
• Exposure to Machine Learning environments
Technical Stack
• Graph Databases (Neo4j preferred)
• AWS
• Spark / Presto
• SQL (Advanced)
• Python / Java / Scala
• Big Data Ecosystems
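As a hedged sketch of why the stack pairs advanced SQL with a graph database, the snippet below contrasts a relational self-join with the equivalent graph traversal. The table name, relationship label, and account IDs are invented for illustration; the SQL and Cypher appear only as comments, with a self-contained plain-Python stand-in so the example runs anywhere.

```python
# Hypothetical schema for illustration only.
# In SQL, finding two-hop counterparty links needs a self-join:
#   SELECT t2.dst FROM txns t1 JOIN txns t2 ON t1.dst = t2.src
#   WHERE t1.src = 'acct_A';
# The equivalent Cypher in Neo4j expresses the hop pattern directly:
#   MATCH (a:Account {id: 'acct_A'})-[:PAID]->()-[:PAID]->(c) RETURN c.id;

txns = [("acct_A", "acct_B"), ("acct_B", "acct_C"), ("acct_B", "acct_D")]

def two_hop(src: str) -> set[str]:
    """Plain-Python stand-in for the two-hop queries sketched above."""
    first_hop = {dst for s, dst in txns if s == src}
    return {dst for s, dst in txns if s in first_hop}

print(sorted(two_hop("acct_A")))  # ['acct_C', 'acct_D']
```

Multi-hop relationship queries like this are where graph databases tend to outperform joins as path length grows, which is presumably why Neo4j experience is called out as highly preferred.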
Compliance & Submission Requirements
Due to prior interview integrity issues, all submissions must meet strict compliance requirements:
• Only submit candidates you have personally worked with or thoroughly vetted
• LinkedIn profile required (established profile preferred)
• Two managerial references (work emails required)
• Candidate must be comfortable with a live coding exercise
• Strong communication skills are mandatory





