

Hadoop Engineer (ODP Platform)
Featured Role | Apply direct with Data Freelance Hub
This role is for a Hadoop Engineer (ODP Platform) on a remote contract of 6+ months; the pay rate is unspecified. Key skills include Apache Spark, Python, and experience with the Hadoop big data ecosystem.
Country: United Kingdom
Currency: £ GBP
Day rate: -
Date discovered: August 15, 2025
Project duration: More than 6 months
Location type: Remote
Contract type: Inside IR35
Security clearance: Unknown
Location detailed: United Kingdom
Skills detailed: #HDFS (Hadoop Distributed File System) #Pig #YARN (Yet Another Resource Negotiator) #Hadoop #Apache Spark #Big Data #Datasets #Data Modeling #Airflow #Spark (Apache Spark) #Python #Data Architecture #Scala #Data Processing #Kafka (Apache Kafka) #Data Engineering #HBase #ETL (Extract, Transform, Load) #Data Ingestion #Apache Airflow
Role description
We're Hiring: Hadoop Engineer – ODP Platform
Mode: Remote | Type: Contract Inside IR35 / FTE (6 months+, extendable)
Key Responsibilities:
• Hands-on development with the Hadoop ecosystem (HDFS, YARN, MapReduce) and the ODP (Open Data Platform) stack.
• Build and optimize ETL pipelines and scalable data ingestion frameworks.
• Develop scripts and pipelines using Python and Apache Airflow.
• Implement real-time data processing using Apache Spark Streaming.
• Work with Hive, HBase, Pig, and Kafka for big data solutions.
• Analyze telemetry logs and performance data to extract actionable insights.
• Optimize large-scale datasets through data modeling, partitioning, and transformation.
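As a rough illustration of the partitioning idea in the last responsibility above: a minimal sketch in plain Python with hypothetical sample records (in the real role this would be done with Spark/Hive over data in HDFS, not in-memory lists).

```python
from collections import defaultdict
from datetime import date

# Hypothetical telemetry records; real ones would live in HDFS/Hive tables.
events = [
    {"day": date(2025, 8, 1), "service": "auth", "latency_ms": 120},
    {"day": date(2025, 8, 1), "service": "billing", "latency_ms": 340},
    {"day": date(2025, 8, 2), "service": "auth", "latency_ms": 95},
]

def partition_by_day(records):
    """Group records by day, mirroring Hive-style date partitioning."""
    parts = defaultdict(list)
    for rec in records:
        parts[rec["day"].isoformat()].append(rec)
    return dict(parts)

partitions = partition_by_day(events)
print(sorted(partitions))  # ['2025-08-01', '2025-08-02']
```

Partitioning by a query-time filter column (typically date) is what lets engines such as Hive and Spark prune entire directories of data instead of scanning the full dataset.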
Mandatory Skills:
✅ Apache Spark
✅ Python
✅ Big Data Hadoop Ecosystem
✅ Data Architecture
If you're passionate about large-scale data engineering and want to work on cutting-edge big data platforms, let's connect!
Apply now or email me at snavlani@redglobal.com for more details.