Hadoop Engineer - ODP Platform

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Hadoop Engineer - ODP Platform, working hybrid in Birmingham/Sheffield on a contract until 28/11/2025. The pay rate is competitive. It requires 5+ years of Hadoop experience, strong Python and Apache Airflow skills, and on-premises experience.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
💰 - Day rate
-
🗓️ - Date discovered
August 12, 2025
🕒 - Project duration
More than 6 months
🏝️ - Location type
Hybrid
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
West Midlands, England, United Kingdom
🧠 - Skills detailed
#Deployment #HDFS (Hadoop Distributed File System) #Python #Compliance #Storage #Shell Scripting #Data Governance #Programming #Scripting #Spark (Apache Spark) #Logging #Linux #YARN (Yet Another Resource Negotiator) #HBase #Ansible #Automation #Apache Airflow #Data Processing #Airflow #Puppet #Data Engineering #Hadoop #ETL (Extract, Transform, Load) #Scala #Data Pipeline #Data Security #Security #Monitoring
Role description
Role Title: Hadoop Engineer / ODP Platform
Location: Birmingham / Sheffield - Hybrid working with 3 days onsite per week
End Date: 28/11/2025

Role Overview
We are seeking a highly skilled Hadoop Engineer to support and enhance our Operational Data Platform (ODP) deployed in an on-premises environment. The ideal candidate will have extensive experience in the Hadoop ecosystem, strong programming skills, and a solid understanding of infrastructure-level data analytics. This role focuses on building and maintaining scalable, secure, and high-performance data pipelines within enterprise-grade on-prem systems.

Key Responsibilities
• Design, develop, and maintain data pipelines using Hadoop technologies in an on-premises infrastructure.
• Build and optimise workflows using Apache Airflow and Spark Streaming for real-time data processing (see the example sketch below).
• Develop robust data engineering solutions using Python for automation and transformation.
• Collaborate with infrastructure and analytics teams to support operational data use cases.
• Monitor and troubleshoot data jobs, ensuring reliability and performance across the platform.
• Ensure compliance with enterprise security and data governance standards.

Required Skills & Experience
• Minimum 5 years of experience in Hadoop and data engineering.
• Strong hands-on experience with Python, Apache Airflow, and Spark Streaming.
• Deep understanding of Hadoop components (HDFS, Hive, HBase, YARN) in on-prem environments.
• Exposure to data analytics, preferably involving infrastructure or operational data.
• Experience working with Linux systems, shell scripting, and enterprise-grade deployment tools.
• Familiarity with monitoring and logging tools relevant to on-prem setups.

Preferred Qualifications
• Experience with enterprise ODP platforms or similar large-scale data systems.
• Knowledge of configuration management tools (e.g., Ansible, Puppet) and CI/CD in on-prem environments.
• Understanding of network and storage architecture in data centers.
• Familiarity with data security, compliance, and audit requirements in regulated industries.
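Purely as an illustration of the Airflow-orchestrated, on-prem pipeline work described above, here is a minimal sketch of a DAG that lands files in HDFS and then runs a Spark job on YARN. The DAG id, paths, and spark-submit arguments are hypothetical placeholders rather than details from this listing, and the exact schedule parameter name depends on the Airflow version in use.

```python
# Minimal illustrative Airflow DAG: ingest files into HDFS, then transform them with Spark on YARN.
# All ids, paths, and commands below are hypothetical placeholders, not details from the listing.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator

default_args = {
    "owner": "data-engineering",        # placeholder owner
    "retries": 2,
    "retry_delay": timedelta(minutes=5),
}

with DAG(
    dag_id="odp_daily_ingest",          # hypothetical DAG name
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",         # "schedule" in newer Airflow releases
    catchup=False,
    default_args=default_args,
) as dag:
    # Copy the day's landed files into the raw zone on HDFS (paths are placeholders).
    ingest = BashOperator(
        task_id="ingest_to_hdfs",
        bash_command="hdfs dfs -put -f /data/landing/*.csv /odp/raw/",
    )

    # Submit a PySpark transformation job to the on-prem YARN cluster.
    transform = BashOperator(
        task_id="spark_transform",
        bash_command=(
            "spark-submit --master yarn --deploy-mode cluster "
            "/opt/odp/jobs/transform_raw.py"
        ),
    )

    ingest >> transform                 # run the Spark job only after ingestion succeeds
```

Wrapping spark-submit in a BashOperator keeps the sketch dependency-free; on a real deployment, the SparkSubmitOperator from the apache-airflow-providers-apache-spark package, or a Spark Structured Streaming job for the real-time cases mentioned above, would be equally reasonable choices.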