

GIOS Technology
Spark & Scala Developer (Java, Hadoop, Kafka, ETL, Big Data, Performance Optimization, Banking)
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Spark & Scala Developer with expertise in Java, Hadoop, Kafka, and ETL, focusing on performance optimization in the banking sector. It is a hybrid position in London, offering a competitive pay rate for a contract of unspecified length.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
Unknown
-
🗓️ - Date
October 22, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
London Area, United Kingdom
-
🧠 - Skills detailed
#Impala #Spark (Apache Spark) #Java #Distributed Computing #Data Pipeline #Scala #Big Data #Hadoop #HBase #ETL (Extract, Transform, Load) #Kafka (Apache Kafka) #Data Processing
Role description
I am hiring for a Spark & Scala Developer (Java, Hadoop, Kafka, ETL, Big Data, Performance Optimization)
Location: London (Hybrid, 2–3 days onsite)
Job Description
• Develop and maintain complex data transformation workflows (ETL) using Big Data technologies.
• Design, optimize, and fine-tune Spark applications for performance and scalability.
• Work extensively with Hive, Impala, and HBase for data processing and management.
• Implement distributed computing solutions using Spark, Scala, and Java.
• Collaborate with cross-functional teams to deliver high-performance data solutions.
• Contribute to process improvements and performance optimization across Big Data pipelines.
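To give a flavour of the kind of data transformation workflow the role involves, here is a minimal extract-transform-load sketch in plain Scala. In practice this would run as a Spark job over Hive or HBase tables; plain collections are used here only so the example stays self-contained, and all names (`Txn`, `EtlSketch`, the CSV layout) are illustrative assumptions, not part of the role specification.

```scala
// Minimal ETL sketch in plain Scala (requires Scala 2.13+ for toLongOption
// and groupMapReduce). A production pipeline would use Spark DataFrames.

case class Txn(account: String, amountPence: Long)

object EtlSketch {
  // Extract: parse raw "account,amount" lines, skipping malformed rows.
  def extract(lines: Seq[String]): Seq[Txn] =
    lines.flatMap { line =>
      line.split(",") match {
        case Array(acct, amt) => amt.trim.toLongOption.map(Txn(acct.trim, _))
        case _                => None
      }
    }

  // Transform: total per account (the Spark analogue is groupBy + sum).
  def transform(txns: Seq[Txn]): Map[String, Long] =
    txns.groupMapReduce(_.account)(_.amountPence)(_ + _)

  // Load: format output rows; a real job would write to Hive/HBase/Kafka.
  def load(totals: Map[String, Long]): Seq[String] =
    totals.toSeq.sortBy(_._1).map { case (a, p) => s"$a,$p" }

  def run(lines: Seq[String]): Seq[String] =
    load(transform(extract(lines)))
}
```

For example, `EtlSketch.run(Seq("acc1,100", "acc1,50", "acc2,25", "bad"))` drops the malformed row and returns `Seq("acc1,150", "acc2,25")`. The Spark-specific part of the role (partitioning, caching, shuffle tuning) sits on top of exactly this extract/transform/load shape.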
Key Skills
Spark, Scala, Java, Hive, Impala, HBase, Hadoop, Kafka, ETL, Big Data, Distributed Computing, Performance Optimization