

Ursus, Inc.
Fraud Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Fraud Data Engineer in San Jose, CA, for 12 months, at a pay rate of $63-$73/hr. It requires 8+ years in data engineering, expertise in ETL/ELT design, and hands-on Neo4j experience.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
584
🗓️ - Date
November 20, 2025
🕒 - Duration
More than 6 months
🏝️ - Location
On-site
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
San Jose, CA
🧠 - Skills detailed
#AI (Artificial Intelligence) #Databricks #Scala #ETL (Extract, Transform, Load) #PySpark #Computer Science #S3 (Amazon Simple Storage Service) #Python #Azure #Storage #Data Science #Version Control #Monitoring #Data Engineering #Data Modeling #Statistics #Data Quality #Base #SQL (Structured Query Language) #Airflow #Spark (Apache Spark) #Neo4J #Databases #Graph Databases #GIT #Batch #ML (Machine Learning) #Cloud #Data Pipeline #DataOps #AWS (Amazon Web Services) #Apache Airflow #Azure Blob Storage #AWS S3 (Amazon Simple Storage Service) #Mathematics
Role description
JOB TITLE: Fraud Data Engineer
LOCATION: San Jose, CA
PAY RANGE: $63-$73/hr.
DURATION: 12 months
TOP 3 SKILLS:
• 8+ years of experience in data engineering or a related field.
• Proven success in ETL/ELT design and implementation, including batch and streaming pipelines.
• Hands-on experience with Neo4j (data modeling, Cypher, query optimization).
Company:
Our client is a global leader in creative software, offering innovative tools for digital media creation, design, and marketing.
About the Team
Our Consumer Trust team is composed of skilled data engineers, data scientists, and analysts who work collaboratively to detect, prevent, and mitigate fraud. We leverage advanced data platforms, graph technologies, and AI-driven insights to protect our customers and strengthen trust across the ecosystem.
The Opportunity
We are seeking an experienced Data Engineering Contractor to help design and scale our next-generation fraud data infrastructure. The ideal candidate brings deep technical expertise in building and optimizing large-scale data pipelines, both batch and real-time, along with hands-on experience in graph databases (Neo4j) and modern cloud data platforms.
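For illustration only, here is a minimal sketch of the kind of entity-relationship query this graph work involves, using the official Neo4j Python driver. The Account and Device labels, the SIGNED_IN_FROM relationship, the connection details, and the threshold are all assumptions made for the example, not the client's actual schema.

from neo4j import GraphDatabase

# Hypothetical connection details; real credentials would come from a secrets store.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def flag_shared_devices(tx, min_accounts):
    # Devices linked to many accounts are a common fraud-ring signal.
    # Labels and relationship names here are illustrative, not a real schema.
    query = (
        "MATCH (a:Account)-[:SIGNED_IN_FROM]->(d:Device) "
        "WITH d, count(a) AS accounts "
        "WHERE accounts >= $min_accounts "
        "RETURN d.id AS device_id, accounts "
        "ORDER BY accounts DESC"
    )
    return [record.data() for record in tx.run(query, min_accounts=min_accounts)]

with driver.session() as session:
    for row in session.execute_read(flag_shared_devices, min_accounts=5):
        print(row)
driver.close()

Query optimization in practice would add an index (e.g., CREATE INDEX FOR (d:Device) ON (d.id)) and profile hot queries with EXPLAIN/PROFILE; the sketch only shows the modeling idiom.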
Key Responsibilities
• Design, build, and maintain robust ETL/ELT pipelines (batch and streaming) for structured and unstructured data using SQL and Python/PySpark (a minimal PySpark sketch follows this list).
• Collaborate with data scientists and business stakeholders to model, query, and visualize complex entity relationships using Neo4j.
• Optimize Neo4j data models and Cypher queries for scalability and performance.
• Build and manage large-scale ML feature stores and integrate them with AI and agentic workflows.
• Develop and maintain integrations across AWS (S3), Azure (Blob Storage, VMs), and third-party threat intelligence APIs to enrich fraud detection and investigation workflows.
• Automate workflows using Apache Airflow or equivalent orchestration tools.
• Apply DataOps best practices, including version control (Git), CI/CD, and monitoring for reliability and maintainability.
• Implement and enforce data quality, lineage, and governance standards across all data assets.
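As a concrete reference for the first responsibility above, a minimal PySpark batch sketch: deduplicate raw events, apply a basic quality filter, and write partitioned Parquet. The S3 paths, bucket, and column names are placeholders, not the client's environment.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("fraud-events-etl").getOrCreate()

# Hypothetical raw landing zone; a real job would read the production bucket.
raw = spark.read.json("s3a://example-bucket/raw/events/")

cleaned = (
    raw
    .dropDuplicates(["event_id"])              # basic data-quality step: drop replays
    .filter(F.col("event_ts").isNotNull())     # reject records with no timestamp
    .withColumn("event_date", F.to_date("event_ts"))
)

# Partition by date so downstream fraud models can scan incrementally.
(cleaned
 .write
 .mode("overwrite")
 .partitionBy("event_date")
 .parquet("s3a://example-bucket/curated/events/"))

A streaming variant would use spark.readStream and writeStream with largely the same transformations; the shape of the pipeline stays the same.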
Qualifications
• 8+ years of experience in data engineering or a related field.
• Proven success in ETL/ELT design and implementation, including batch and streaming pipelines.
• Strong proficiency in SQL, Python, and PySpark.
• Hands-on experience with Neo4j (data modeling, Cypher, query optimization).
• Experience building ML feature stores and integrating with AI/ML pipelines.
• Working knowledge of AWS and Azure data services.
• Familiarity with Apache Airflow or similar orchestration tools (a minimal DAG sketch follows this list).
• Proficient in Git and CI/CD workflows.
• Strong understanding of data quality, lineage, and governance.
• Nice to Have: Experience with Databricks.
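To ground the orchestration items above, a minimal Airflow 2.x TaskFlow sketch is shown below; the DAG name, schedule, paths, and task bodies are placeholders, and a real deployment would submit the Spark job via an operator rather than a print.

from datetime import datetime
from airflow.decorators import dag, task

@dag(schedule="@daily", start_date=datetime(2025, 1, 1), catchup=False)
def fraud_events_pipeline():
    @task
    def extract() -> str:
        # Placeholder: land raw events and return their path.
        return "s3://example-bucket/raw/events/"

    @task
    def transform(path: str):
        # Placeholder: in practice this would trigger the PySpark job above.
        print(f"transforming {path}")

    transform(extract())

fraud_events_pipeline()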
Education
• Master’s degree in Statistics, Mathematics, Computer Science, or a related field (or bachelor’s degree with equivalent experience).
BENEFITS SUMMARY: Individual compensation is determined by skills, qualifications, experience, and location. Compensation details listed in this posting reflect the base hourly rate or annual salary only, unless otherwise stated. In addition to base compensation, full-time roles are eligible for Medical, Dental, Vision, Commuter, and 401(k) benefits with company matching.