

Experis UK
Spark/Scala Developer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Spark/Scala Developer based in London for 2 months at £308/day. Key skills include expertise in Spark, Scala, ETL workflows, HIVE, Impala, HBase, and Java. Experience with Big Data technologies and optimization is essential.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
308
-
🗓️ - Date
October 22, 2025
🕒 - Duration
2 months
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
London, England, United Kingdom
-
🧠 - Skills detailed
#Impala #Spark (Apache Spark) #Java #Distributed Computing #Scala #Big Data #Hadoop #HBase #"ETL (Extract, Transform, Load)" #Kafka (Apache Kafka) #Data Processing
Role description
Role Title: Spark/Scala Developer
Location: London - Days on site: 2-3
Duration: 2 months
Rate: £308/day
• Expertise in Spark and Scala
• Experience developing complex data transformation workflows (ETL) using Big Data technologies
• Strong expertise in Hive, Impala, and HBase
• Hands-on experience fine-tuning Spark jobs
• Experience with Java and distributed computing
• In-depth understanding of Big Data/Hadoop technologies, distributed computing, and data processing frameworks
• Exceptional analytical and problem-solving skills, with a focus on innovative and efficient solutions
• Demonstrable experience optimizing and fine-tuning Big Data applications for improved performance and efficiency
• Hands-on experience with relevant tools such as Apache Hadoop, Spark, Kafka, and other industry-standard platforms
• Good to have: external technology contributions (notable open-source contributions)
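To illustrate the kind of work the bullets above describe, here is a minimal Spark/Scala ETL sketch. It assumes a Spark 3.x runtime with Hive support; the table names (`raw.orders`, `mart.daily_revenue`) and column names are hypothetical, not taken from the role.

```scala
import org.apache.spark.sql.{SparkSession, functions => F}

object OrdersEtl {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("orders-etl")
      // Tuning knobs of the sort "fine-tuning Spark jobs" refers to:
      .config("spark.sql.shuffle.partitions", "200")
      .config("spark.sql.adaptive.enabled", "true")
      .enableHiveSupport()
      .getOrCreate()

    // Extract: read a Hive table (hypothetical name).
    val orders = spark.table("raw.orders")

    // Transform: filter, derive a date column, aggregate.
    val daily = orders
      .filter(F.col("status") === "COMPLETE")
      .withColumn("order_date", F.to_date(F.col("created_ts")))
      .groupBy("order_date")
      .agg(
        F.sum("amount").as("revenue"),
        F.count(F.lit(1)).as("n_orders")
      )

    // Load: write a partitioned Parquet table readable from Hive/Impala.
    daily.write
      .mode("overwrite")
      .partitionBy("order_date")
      .format("parquet")
      .saveAsTable("mart.daily_revenue")

    spark.stop()
  }
}
```

In practice, fine-tuning such a job also involves choices like broadcast joins for small dimension tables, caching reused DataFrames, and sizing shuffle partitions to the data volume.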