Optomi

Senior Data Engineer (Graph)

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer (Graph) with a contract length of "Unknown" and a day rate of $720 USD. Candidates must have 5+ years of data engineering experience; proficiency in Neo4j/Cypher, Python, and ETL workflows for graph datasets; and familiarity with Agile/Scrum.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
720
🗓️ - Date
February 12, 2026
🕒 - Duration
Unknown
🏝️ - Location
Unknown
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
San Francisco Bay Area
🧠 - Skills detailed
#PostgreSQL #Python #Documentation #Neo4J #Schema Design #Databases #"ETL (Extract, Transform, Load)" #MySQL #Cloud #RDBMS (Relational Database Management System) #Automation #Agile #Graph Databases #dbt (data build tool) #Snowflake #Data Engineering #Data Warehouse #Datasets #Storage #Scrum #Scala #Collibra #Programming #Data Quality #Data Pipeline #Airflow
Role description
This role is open to candidates located in Seattle, Glendale, Santa Monica, or San Francisco.

Senior Data Engineer

What you’ll do:
• As a Senior Data Engineer, you’ll help transform data into actionable insights by designing and delivering modern, scalable data solutions. You’ll partner with a cross-functional team to build data products and pipelines that improve data-driven decision-making across the organization.

Core Requirements
• 5+ years of data engineering experience building and supporting production-grade data pipelines
• Neo4j and/or Cypher (or an equivalent graph database and query language)
• Graph database expertise: a strong understanding of graph concepts, tradeoffs versus RDBMS, and real-world use cases
• Proficiency in Python (or another major programming language)
• Experience building ETL/ELT workflows for graph-oriented datasets (extracting, transforming, and loading graph data)
• Workflow orchestration in production: Airflow strongly preferred (or similar tools such as Prefect)
• Experience integrating graph databases with cloud data warehouses (e.g., Neo4j + Snowflake or equivalent)
• Relational database experience: PostgreSQL, MySQL, MSSQL
• Familiarity with Agile/Scrum

Key Responsibilities
• Design, build, and maintain data platform pipelines supporting structured, graph, and unstructured datasets
• Architect and implement graph database models, including schema design and scalable solution development
• Apply strong data engineering principles across cloud services and modern data platforms (storage, compute, messaging/event-driven services, and table formats such as Iceberg)
• Build and support data transformation, orchestration, and automation workflows (dbt, Airflow)
• Implement and monitor data quality and governance practices using tools such as Great Expectations, Soda, or Collibra
• Participate in and advocate for Agile/Scrum ceremonies to improve collaboration and delivery
• Partner with product managers, architects, and engineers to deliver core data platform capabilities
• Create and maintain documentation, standards, and best practices (pipeline configuration, naming conventions, etc.)
• Ensure operational excellence of platform datasets to meet SLAs and reliability expectations
• Engage internal stakeholders to understand needs, prioritize enhancements, and drive adoption
• Maintain detailed documentation of changes to support governance, auditability, and data quality requirements
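The core requirements center on building ETL workflows that load graph-shaped data into Neo4j with Python. The snippet below is a minimal sketch of that kind of load step, not part of the posting itself: the connection details, labels (Customer, Product, PURCHASED), and sample rows are illustrative assumptions, and it targets the official neo4j Python driver 5.x (which provides execute_write).

```python
# Minimal sketch: load tabular rows into Neo4j as a small graph.
# All names and credentials here are placeholders for illustration.
from neo4j import GraphDatabase

URI = "bolt://localhost:7687"   # assumed local Neo4j instance
AUTH = ("neo4j", "password")    # placeholder credentials

def load_orders(tx, rows):
    # MERGE keeps the load idempotent, so a pipeline retry does not
    # duplicate nodes or relationships.
    tx.run(
        """
        UNWIND $rows AS row
        MERGE (c:Customer {id: row.customer_id})
        MERGE (p:Product  {id: row.product_id})
        MERGE (c)-[:PURCHASED {order_id: row.order_id}]->(p)
        """,
        rows=rows,
    )

rows = [
    {"customer_id": 1, "product_id": 42, "order_id": "A-100"},
    {"customer_id": 2, "product_id": 42, "order_id": "A-101"},
]

with GraphDatabase.driver(URI, auth=AUTH) as driver:
    with driver.session() as session:
        session.execute_write(load_orders, rows)
```

Using MERGE rather than CREATE is what makes the batch safe to replay, which matters for the operational-excellence and SLA expectations listed above.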
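The posting also calls out production workflow orchestration with Airflow. Below is a minimal sketch of how such a graph ETL might be scheduled; the DAG id, task callables, and daily schedule are assumptions for illustration, and the schedule argument assumes Airflow 2.4 or later.

```python
# Minimal sketch of an Airflow DAG wiring extract -> transform -> load.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    ...  # placeholder: pull rows from the relational source

def transform():
    ...  # placeholder: shape rows into nodes and relationships

def load():
    ...  # placeholder: write the graph batch to Neo4j

with DAG(
    dag_id="graph_etl",          # hypothetical DAG name
    start_date=datetime(2026, 1, 1),
    schedule="@daily",           # Airflow 2.4+ keyword
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load
```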