Neo4j Graph Developer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Neo4j Graph Developer in New York, NY, hybrid onsite for 12 months, offering competitive pay. Requires 3+ years in money movement/trade monitoring within financial services, 8+ years in IT, and advanced skills in Neo4j and Python.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
-
🗓️ - Date discovered
September 19, 2025
🕒 - Project duration
More than 6 months
-
🏝️ - Location type
Hybrid
-
📄 - Contract type
Unknown
-
🔒 - Security clearance
Unknown
-
📍 - Location detailed
New York, NY
-
🧠 - Skills detailed
#AI (Artificial Intelligence) #Python #Neo4J #Indexing #SQL (Structured Query Language) #Spark (Apache Spark) #Impala #GIT #Cloud #Data Science #Batch #Bash #Version Control #Cloudera #SQL Server #Apache Spark #Agile #Deployment #Monitoring #Linux #Data Architecture #Programming #Data Engineering #Scripting #Databases #Database Design #Kafka (Apache Kafka) #ML (Machine Learning) #Hadoop #Data Ingestion
Role description
Neo4j Graph Developer Location: New York, NY – Hybrid Onsite Duration: 12 months Position Summary Seeking an experienced Neo4j Graph Developer with strong expertise in graph database design, data science, and real-time data engineering. The ideal candidate will have a background in financial services—particularly money movement, trade monitoring, AML, fraud, or surveillance—and the ability to architect, optimize, and implement complex graph solutions. Required Skills & Experience • Domain Expertise: Minimum 3 years’ experience in Money Movement / Trade Monitoring (AML, Fraud, Surveillance) within Wealth Management financial services. • Neo4j Graph Database: 8+ years overall IT experience, with at least 3 years as an Architect/Senior Developer using Neo4j. • Advanced proficiency with Cypher queries, Graph Data Science, and data ingestion. • Ability to review and recommend database and infrastructure configurations for optimal performance and resilience. • Skilled in query performance tuning (indexing, modeling, etc.). • Programming & Data Engineering: • Strong Python skills for batch data engineering on Apache Spark, populating Neo4j, and generating downstream data science feeds. • Real-time service integration with Kafka to persist business events in Neo4j. • Near real-time stream processing to derive ML model inference features. • Analytics & ML: Advanced analytics expertise to solve complex graph problems using AI & ML techniques. • Full SDLC: Experience across requirements analysis, data architecture/modeling, development, testing, and deployment. • Strong SQL skills to query databases such as SQL Server and Impala. • Excellent communication, interpersonal, analytical, and problem-solving abilities. • Quick learner, able to adapt to new technologies and techniques. Desired Skills • Experience with Cloudera Hadoop. • Agile development environment experience. • Proficiency with version control systems (e.g., GIT). • Bash scripting and Linux environment expertise. 
• Familiarity with CI/CD systems. • Hands-on experience with job scheduling tools such as Autosys (preferred) or Control-M. • Ability to write Python/Shell scripts in Linux.
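The batch-ingestion requirement above (Python pipelines populating Neo4j) is often met by sending fixed-size record chunks through a parameterized `UNWIND ... MERGE` Cypher statement, which keeps each write transaction small. A minimal sketch follows; the `Account`/`SENT` labels and field names are hypothetical money-movement examples, not taken from the posting, and the driver call is shown only in a comment:

```python
# Hypothetical sketch: chunking money-movement records for batched Neo4j
# ingestion via a parameterized UNWIND ... MERGE statement. The Account/SENT
# schema below is illustrative, not specified by the role.

MERGE_CYPHER = """
UNWIND $rows AS row
MERGE (a:Account {id: row.src})
MERGE (b:Account {id: row.dst})
MERGE (a)-[t:SENT {txn_id: row.txn_id}]->(b)
SET t.amount = row.amount
"""

def batches(rows, size=1000):
    """Yield fixed-size chunks so each Neo4j transaction stays small."""
    for i in range(0, len(rows), size):
        yield rows[i:i + size]

# With the official neo4j Python driver, each chunk would then be written
# roughly as:
#   with driver.session() as session:
#       for chunk in batches(records):
#           session.execute_write(lambda tx, b=chunk: tx.run(MERGE_CYPHER, rows=b))
```

Keeping `MERGE` keyed on an indexed property (here `Account.id`) is what makes this idempotent and fast, which ties back to the indexing/query-tuning requirement above.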