

Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer; the contract length and pay rate are unspecified. It requires expertise in GCP, Neo4j, ETL/ELT workflows, and data governance, and experience with graph databases and large-scale data ingestion is essential.
Country
United Kingdom
Currency
£ GBP
Day rate
Unknown
Date discovered
June 13, 2025
Project duration
Unknown
Location type
Unknown
Contract type
Unknown
Security clearance
Unknown
Location detailed
United Kingdom
Skills detailed
#Data Management #Data Engineering #Knowledge Graph #Metadata #Spark (Apache Spark) #BigQuery #Apache Beam #Neo4J #Scala #Data Governance #Security #Data Catalog #Dataflow #Data Pipeline #IAM (Identity and Access Management) #ETL (Extract, Transform, Load) #Datasets #Storage #GCP (Google Cloud Platform) #Kafka (Apache Kafka) #Cloud
Role description
Requirements:
- Architect and build data pipelines on GCP integrating structured, semi-structured, and unstructured data sources.
- Design, implement, and optimize graph database solutions using Neo4j, Cypher queries, and GCP integrations (see the Cypher sketch after this list).
- Develop ETL/ELT workflows using Dataflow, Pub/Sub, BigQuery, and Cloud Storage (see the Beam sketch after this list).
- Design graph models for real-world applications such as fraud detection, network analysis, and knowledge graphs.
- Optimize the performance of graph queries and design for scalability.
- Support ingestion of large-scale datasets into GCP environments using Apache Beam, Spark, or Kafka.
- Implement metadata management, security, and data governance using Data Catalog and IAM.
- Work with cross-functional teams and clients across diverse EMEA time zones and project settings.
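
To make the ETL/ELT expectations concrete, here is a minimal sketch of the kind of workflow the role describes: an Apache Beam pipeline that reads JSON events from Pub/Sub and appends them to a BigQuery table. The project, region, topic, table, and schema names are hypothetical placeholders, not details from the posting.

```python
# Minimal sketch of a streaming ETL pipeline on GCP with Apache Beam.
# All resource names (project, topic, table, schema) are hypothetical.
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions, StandardOptions


def run():
    options = PipelineOptions(project="my-gcp-project", region="europe-west2")
    options.view_as(StandardOptions).streaming = True  # Pub/Sub sources are unbounded

    with beam.Pipeline(options=options) as p:
        (
            p
            # Read raw messages (bytes) from a Pub/Sub topic.
            | "ReadEvents" >> beam.io.ReadFromPubSub(
                topic="projects/my-gcp-project/topics/events")
            # Decode and parse each message as a JSON object.
            | "ParseJson" >> beam.Map(lambda msg: json.loads(msg.decode("utf-8")))
            # Append the parsed rows to a BigQuery table.
            | "WriteToBQ" >> beam.io.WriteToBigQuery(
                "my-gcp-project:analytics.events",
                schema="event_id:STRING,user_id:STRING,ts:TIMESTAMP",
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            )
        )


if __name__ == "__main__":
    run()
```

Submitted with the Dataflow runner, the same pipeline would run as a managed streaming Dataflow job, which is the GCP-native deployment the posting implies.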
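For the Neo4j/Cypher side, here is a minimal sketch using the official Python driver, assuming a hypothetical (:Account)-[:USED]->(:Device) schema: it runs a shared-device query of the kind used in fraud detection, one of the applications the role names. The URI, credentials, labels, and relationship types are illustrative only.

```python
# Minimal sketch of a fraud-detection-style Cypher query via the Neo4j
# Python driver. Connection details and the graph schema are hypothetical.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

# Find pairs of accounts that used the same device: a common
# shared-identifier pattern in fraud detection.
QUERY = """
MATCH (a1:Account)-[:USED]->(d:Device)<-[:USED]-(a2:Account)
WHERE a1 <> a2
RETURN a1.id AS account_1, a2.id AS account_2, d.id AS shared_device
LIMIT 25
"""

with driver.session() as session:
    for record in session.run(QUERY):
        print(record["account_1"], record["account_2"], record["shared_device"])

driver.close()
```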
Thanks & Regards,
Pooja K | Technical Recruiter
UK/EU AmpsTek Services Limited
Mail ID: pooja.k@ampstek.com