

Graph Data Pipeline Engineer - W2 Only
Featured Role | Apply direct with Data Freelance Hub
This role is for a Graph Data Pipeline Engineer; the contract length and pay rate are not specified. Key skills include ETL/ELT pipeline development, graph databases (Neo4j, Amazon Neptune), and proficiency in Python, Scala, or Java. A degree in Computer Science or a related field and 4+ years of data engineering experience are required.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: July 11, 2025
Project duration: Unknown
Location type: Unknown
Contract type: W2 Contractor
Security clearance: Unknown
Location detailed: San Francisco Bay Area
Skills detailed: #Scala #TigerGraph #Cloud #Data Pipeline #AWS (Amazon Web Services) #Graph Databases #Spark (Apache Spark) #ETL (Extract, Transform, Load) #Computer Science #Data Engineering #Databases #Hadoop #Lambda (AWS Lambda) #Java #Python #Data Processing #AWS Glue #Neo4J #Data Modeling #S3 (Amazon Simple Storage Service) #Amazon Neptune
Role description
A technology services client of ours is looking for a Graph Data Pipeline Engineer for their ongoing projects.
Below are the additional details of this role:
Required Qualifications:
• Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
• 4+ years of experience in data engineering, including building ETL/ELT pipelines.
• Hands-on experience with graph databases such as Neo4j, Amazon Neptune, JanusGraph, or TigerGraph.
• Proficiency in Python, Scala, or Java for data processing and pipeline development.
• Experience with data modeling, particularly in graph theory and network analysis.
• Familiarity with cloud services (e.g., AWS Glue, Lambda, S3, EMR) and distributed data systems (e.g., Spark, Hadoop).
• Strong knowledge of query languages such as Cypher, Gremlin, or SPARQL (see the brief sketch below).
This role is W2 only and is open to USC, GC, H1B, and EAD candidates.
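
To give a concrete sense of the pipeline work described above, here is a minimal sketch of an idempotent graph load in Python. It assumes the official neo4j driver, a Neo4j instance reachable at bolt://localhost:7687, and placeholder credentials; the Person/KNOWS model and the in-memory edge list are illustrative only, not part of the role description.

```python
# Minimal sketch: load a small edge list into Neo4j as a graph.
# Assumptions (illustrative, not from the posting): a Neo4j instance at
# bolt://localhost:7687, placeholder credentials, and the official
# `neo4j` Python driver installed (pip install neo4j).
from neo4j import GraphDatabase

# Tiny in-memory edge list standing in for an upstream data source.
EDGES = [
    ("alice", "bob"),
    ("bob", "carol"),
]

def load_edges(tx, pairs):
    # MERGE keeps the load idempotent: nodes and relationships are
    # created only if they do not already exist.
    for src, dst in pairs:
        tx.run(
            "MERGE (a:Person {name: $src}) "
            "MERGE (b:Person {name: $dst}) "
            "MERGE (a)-[:KNOWS]->(b)",
            src=src,
            dst=dst,
        )

def main():
    driver = GraphDatabase.driver(
        "bolt://localhost:7687", auth=("neo4j", "password")
    )
    with driver.session() as session:
        session.execute_write(load_edges, EDGES)
    driver.close()

if __name__ == "__main__":
    main()
```

In a production pipeline the edge list would typically come from an upstream store (for example, files on S3 read via AWS Glue or Spark), and writes would usually be batched with Cypher's UNWIND clause rather than issued one MERGE at a time.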