

Hadoop Scala Developer – Northampton
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Hadoop Scala Developer in Northampton, offering a PAYE/Umbrella contract with 2 days onsite and 3 days WFH. Key skills include Hadoop, Scala, Spark, and Kafka. Strong analytical and leadership skills are essential.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
💰 - Day rate
440
🗓️ - Date discovered
August 20, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
Hybrid
📄 - Contract type
Inside IR35
🔒 - Security clearance
Unknown
📍 - Location detailed
Northampton, England, United Kingdom
🧠 - Skills detailed
#Leadership #Databases #Spark (Apache Spark) #NoSQL #Metadata #Unit Testing #Data Processing #Data Engineering #Apache Spark #Hadoop #Distributed Computing #GIT #Kafka (Apache Kafka) #Big Data #HDFS (Hadoop Distributed File System) #Data Science #Batch #Airflow #Version Control #Data Quality #Data Pipeline #YARN (Yet Another Resource Negotiator) #Data Management #HBase #Scala
Role description
Hadoop Scala Developer – Northampton
Atrium EMEA are seeking a highly skilled Hadoop and Scala Developer to join the data engineering team in Northampton. You will be responsible for building and optimizing data processing pipelines and big data solutions. Your primary focus will be the Hadoop ecosystem and Scala-based applications, contributing to the design and development of scalable, robust, high-performance data systems. PAYE/Umbrella contract; 2 days onsite, 3 days WFH.
Responsibilities:
• Design and implement large-scale data processing solutions using Hadoop, Spark, Hive, and Scala.
• Develop and maintain scalable data pipelines for real-time and batch data processing.
• Optimize the performance of existing data workflows and troubleshoot complex data issues.
• Collaborate with data scientists, analysts, and other engineers to support data-driven initiatives.
• Work with HDFS, MapReduce, YARN, and related components to ensure reliable data movement and processing.
• Write clean, maintainable, and efficient Scala code following best practices.
• Implement data quality checks, lineage, and metadata management.
• Automate and monitor data pipelines with tools such as Oozie, Airflow, or similar.
• Stay up to date with industry trends in big data technologies and introduce new tools where appropriate.
Essential skills and experience:
• Deep understanding of Apache Spark and performance tuning.
• Experience with Kafka, HBase, NoSQL databases, or other big data technologies.
• Solid understanding of distributed computing principles.
• Experience with CI/CD, version control (Git), and unit testing frameworks.
• Strong problem-solving and analytical skills.
• Strong leadership and mentorship skills.
• Ability to work independently and collaboratively in a fast-paced environment.
• Excellent verbal and written communication skills.
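For candidates brushing up on the core stack, the map, shuffle, and reduce model that Hadoop MapReduce and Spark are built around can be sketched with plain Scala collections (an illustrative local sketch only; a production job would operate on an RDD or Dataset across a cluster):

```scala
// Word count, the canonical MapReduce example, in plain Scala collections.

// Map phase: split each line into (word, 1) pairs.
def mapPhase(lines: Seq[String]): Seq[(String, Int)] =
  lines
    .flatMap(_.split("\\s+"))
    .filter(_.nonEmpty)
    .map(word => (word.toLowerCase, 1))

// Shuffle + reduce phase: group pairs by key, then sum the counts per word.
def reducePhase(pairs: Seq[(String, Int)]): Map[String, Int] =
  pairs.groupBy(_._1).map { case (word, ps) => word -> ps.map(_._2).sum }

def wordCount(lines: Seq[String]): Map[String, Int] =
  reducePhase(mapPhase(lines))
```

In Spark the same shape appears as `flatMap`, `map`, and `reduceByKey` over an RDD, with the shuffle handled by the cluster rather than `groupBy` in local memory.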
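Likewise, the data quality checks mentioned above can be sketched as composable validation rules in plain Scala (the `Trade` record, its fields, and the rule names are hypothetical, chosen only for illustration):

```scala
// Hypothetical record type for illustration; field names are assumptions,
// not taken from the role description.
final case class Trade(id: String, amount: BigDecimal, currency: String)

// A data-quality rule returns an error message when a record fails it.
type Rule = Trade => Option[String]

val nonEmptyId: Rule = t =>
  if (t.id.trim.isEmpty) Some("empty id") else None

val positiveAmount: Rule = t =>
  if (t.amount <= 0) Some(s"non-positive amount for ${t.id}") else None

val knownCurrency: Rule = t =>
  if (Set("GBP", "USD", "EUR")(t.currency)) None
  else Some(s"unknown currency ${t.currency} for ${t.id}")

// Apply every rule; a record is valid only when the result is empty.
def validate(t: Trade, rules: Seq[Rule]): Seq[String] =
  rules.flatMap(rule => rule(t))
```

Keeping each rule as a small function makes the checks easy to unit test in isolation and to run as a filtering step inside a pipeline.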
Click Apply Now or contact Lianne to be considered for the Hadoop Scala Developer – Northampton role.