Hadoop Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Hadoop Engineer on a contract running until November 2025, paying up to £500 per day (Inside IR35). Based in Birmingham or Sheffield (hybrid), it requires 5+ years' experience with Hadoop and strong skills in Python, Apache Airflow, and Spark Streaming.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
💰 - Day rate
500
🗓️ - Date discovered
August 20, 2025
🕒 - Project duration
More than 6 months
🏝️ - Location type
Hybrid
📄 - Contract type
Inside IR35
🔒 - Security clearance
Unknown
📍 - Location detailed
South Yorkshire, England, United Kingdom
🧠 - Skills detailed
#Apache Airflow #Linux #Spark (Apache Spark) #Deployment #Python #Data Engineering #Hadoop #Monitoring #Puppet #Security #Compliance #HDFS (Hadoop Distributed File System) #Logging #Data Security #Automation #Ansible #Shell Scripting #Programming #Airflow #ETL (Extract, Transform, Load) #Scripting #Data Pipeline #YARN (Yet Another Resource Negotiator) #HBase #Storage #Scala
Role description
Hadoop Engineer
Location: Birmingham or Sheffield (Hybrid, 2 days onsite p/w)
Contract End Date: November 2025
Rate: Up to £500 per day, Inside IR35

About the Role
We’re looking for experienced Hadoop Engineers to join our team and help drive the development, optimisation, and support of our on-premises Operational Data Platform (ODP). You will work on complex, enterprise-scale data environments, ensuring performance, security, and reliability for critical operational data workloads. If you have deep expertise in the Hadoop ecosystem, strong programming skills, and a passion for building robust data pipelines, this is your chance to work on a high-impact platform supporting mission-critical analytics.

Key Responsibilities
• Design, develop, and maintain scalable data pipelines using Hadoop technologies in an on-premises environment.
• Build and optimise real-time processing workflows using Apache Airflow and Spark Streaming (see the sketch after this description).
• Develop automation and transformation solutions in Python.
• Collaborate with infrastructure and analytics teams to deliver operational data solutions.
• Monitor, troubleshoot, and fine-tune jobs to ensure platform reliability and performance.
• Ensure adherence to enterprise security, compliance, and governance standards.

Required Skills & Experience
• 5+ years’ experience in Hadoop and data engineering roles.
• Proven hands-on expertise with Python, Apache Airflow, and Spark Streaming.
• Strong knowledge of Hadoop components (HDFS, Hive, HBase, YARN) in on-prem environments.
• Experience with Linux systems, shell scripting, and enterprise deployment tools.
• Exposure to infrastructure-level or operational data analytics.
• Familiarity with monitoring and logging tools in on-prem environments.

Preferred Qualifications
• Experience with enterprise ODP platforms or large-scale data systems.
• Knowledge of configuration management tools (e.g., Ansible, Puppet) and on-prem CI/CD pipelines.
• Understanding of network and storage architecture in data centres.
• Awareness of data security, compliance, and audit requirements in regulated industries.
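As a rough illustration of the Airflow and Spark Streaming work described above, the sketch below shows a minimal Airflow 2.x DAG that submits a Spark Structured Streaming job via spark-submit. The DAG name, job script path, connection ID, and YARN queue are illustrative assumptions, not details taken from this role.

```python
# Minimal illustrative sketch: an Airflow DAG that submits a Spark Structured
# Streaming job to an on-prem YARN cluster. All names and paths are assumptions.
from datetime import datetime

from airflow import DAG
from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

with DAG(
    dag_id="odp_stream_ingest",        # hypothetical DAG name
    start_date=datetime(2025, 1, 1),
    schedule=None,                     # long-running streaming job, triggered on demand
    catchup=False,
) as dag:
    SparkSubmitOperator(
        task_id="spark_streaming_ingest",
        application="/opt/odp/jobs/stream_ingest.py",  # hypothetical PySpark streaming script
        conn_id="spark_default",                       # Airflow connection for the cluster
        conf={"spark.yarn.queue": "odp"},              # assumed YARN queue
    )
```

In practice a streaming DAG like this is usually unscheduled and restarted by monitoring or an on-call process rather than run on a cron-style interval, which is why `schedule` is left as `None` in the sketch.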