Hadoop Engineer (ODP Platform)

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Hadoop Engineer (ODP Platform) on a remote contract of six months or more; the pay rate is unspecified. Key skills include Apache Spark, Python, and experience with the Hadoop big data ecosystem.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
-
πŸ—“οΈ - Date discovered
August 15, 2025
🕒 - Project duration
More than 6 months
-
🏝️ - Location type
Remote
-
📄 - Contract type
Inside IR35
-
🔒 - Security clearance
Unknown
-
πŸ“ - Location detailed
United Kingdom
-
🧠 - Skills detailed
#HDFS (Hadoop Distributed File System) #Pig #YARN (Yet Another Resource Negotiator) #Hadoop #Apache Spark #Big Data #Datasets #Data Modeling #Airflow #Spark (Apache Spark) #Python #Data Architecture #Scala #Data Processing #Kafka (Apache Kafka) #Data Engineering #HBase #ETL (Extract, Transform, Load) #Data Ingestion #Apache Airflow
Role description
🚀 We're Hiring: Hadoop Engineer – ODP Platform 🚀
📍 Mode: Remote | Type: Contract Inside IR35 / FTE (6 months+, extendable)

Key Responsibilities:
• Hands-on development with the Hadoop ecosystem (HDFS, YARN, MapReduce) and the ODP (Open Data Platform) stack.
• Build and optimize ETL pipelines and scalable data ingestion frameworks.
• Develop scripts and pipelines using Python and Apache Airflow.
• Implement real-time data processing using Apache Spark Streaming.
• Work with Hive, HBase, Pig, and Kafka for big data solutions.
• Analyze telemetry logs and performance data to extract actionable insights.
• Optimize large-scale datasets through data modeling, partitioning, and transformation.

Mandatory Skills:
✅ Apache Spark
✅ Python
✅ Big Data Hadoop Ecosystem
✅ Data Architecture

💡 If you're passionate about large-scale data engineering and want to work on cutting-edge big data platforms, let's connect!
📩 Apply now or email snavlani@redglobal.com for more details.
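For candidates gauging the day-to-day work: the partitioning-and-transformation responsibility above boils down to grouping raw records by a partition key before they are written out. A minimal plain-Python sketch (no Hadoop or Spark dependencies; all record fields and names here are hypothetical, not from the job spec):

```python
from collections import defaultdict


def partition_by_date(records, transform):
    """Group raw event records into date-keyed partitions, applying a
    per-record transform — a toy stand-in for the partition step an ETL
    pipeline would perform before writing data out (e.g. to HDFS)."""
    partitions = defaultdict(list)
    for record in records:
        partitions[record["event_date"]].append(transform(record))
    return dict(partitions)


# Hypothetical telemetry events, keyed by the date they occurred.
events = [
    {"event_date": "2025-08-15", "latency_ms": 120},
    {"event_date": "2025-08-15", "latency_ms": 95},
    {"event_date": "2025-08-16", "latency_ms": 210},
]

by_day = partition_by_date(events, lambda r: r["latency_ms"])
# → {"2025-08-15": [120, 95], "2025-08-16": [210]}
```

In a real pipeline the same shape appears at much larger scale, with Spark or Airflow tasks doing the grouping and each partition landing in its own directory.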