Big Data Hadoop Spark Developer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Big Data Hadoop Spark Developer in New York City, NY, on a long-term contract. Requires 5–8 years in Big Data, strong Apache Spark expertise, and proficiency with Hadoop ecosystem tools. Programming skills in Java, Scala, or Python are essential.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date discovered
August 23, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
On-site
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
New York, NY
🧠 - Skills detailed
#Business Analysis #Code Reviews #YARN (Yet Another Resource Negotiator) #Data Pipeline #Big Data #Data Security #Programming #Hadoop #SQL (Structured Query Language) #Spark (Apache Spark) #Apache Spark #ETL (Extract, Transform, Load) #Scrum #Data Ingestion #Python #Batch #Data Processing #HDFS (Hadoop Distributed File System) #Data Architecture #HBase #Java #Agile #Scala #Security #Sqoop (Apache Sqoop) #Data Governance
Role description
Job Title: Big Data Hadoop Spark Developer
Location: New York City, NY (Onsite)
Duration: Long-term Contract

About the Role
We are seeking a highly skilled Big Data Hadoop Developer with expertise in Apache Spark to join our technology team in New York. The candidate will be responsible for designing, building, and optimizing large-scale data processing systems that support critical business applications and analytics.

Responsibilities:
• Design and develop data ingestion, processing, and transformation pipelines using Hadoop ecosystem tools (HDFS, Hive, HBase, Oozie, Sqoop, Flume).
• Build and optimize distributed data processing applications using Apache Spark (Core, SQL, Streaming); illustrative sketches follow after the skills list below.
• Work with structured and unstructured data to develop scalable, high-performance solutions.
• Collaborate with data architects, business analysts, and application teams to understand requirements and deliver robust solutions.
• Implement best practices for performance tuning, data security, and data governance.
• Troubleshoot and resolve issues related to Hadoop clusters, Spark jobs, and data pipelines.
• Participate in Agile/Scrum development cycles, contributing to sprint planning, code reviews, and technical discussions.

Required Skills:
• 5–8 years of experience in the Big Data / Hadoop ecosystem.
• Strong expertise in Apache Spark (Core, SQL, Streaming) for batch and real-time processing.
• Hands-on experience with HDFS, Hive, HBase, Sqoop, Oozie, Flume, and YARN.
• Solid programming skills in Java / Scala / Python.
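To give candidates a concrete sense of the batch work described above, here is a minimal sketch of the kind of Spark job the Responsibilities outline: read from HDFS, transform with Spark SQL functions, and write to Hive. The paths, table, and column names here are hypothetical, invented purely for illustration.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object BatchIngestJob {
  def main(args: Array[String]): Unit = {
    // Hive support lets the job write managed tables; cluster settings
    // (YARN master, executor sizing) are normally supplied via spark-submit.
    val spark = SparkSession.builder()
      .appName("BatchIngestJob")
      .enableHiveSupport()
      .getOrCreate()

    // Hypothetical HDFS input: raw trade events as JSON.
    val raw = spark.read.json("hdfs:///data/raw/trades/2025-08-23/")

    // Typical transformation step: filter bad records, derive a date
    // column, and aggregate with Spark SQL functions.
    val daily = raw
      .filter(col("symbol").isNotNull && col("price") > 0)
      .withColumn("trade_date", to_date(col("event_time")))
      .groupBy(col("trade_date"), col("symbol"))
      .agg(sum("quantity").as("volume"), avg("price").as("avg_price"))

    // Write partitioned output to a (hypothetical) Hive table.
    daily.write
      .mode("overwrite")
      .partitionBy("trade_date")
      .saveAsTable("analytics.daily_trade_summary")

    spark.stop()
  }
}
```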
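The real-time side of the role typically translates to Spark Structured Streaming. The sketch below counts events per symbol over one-minute event-time windows; the Kafka broker, topic, checkpoint path, and message schema are assumptions for illustration only, and a real job would write to HDFS, Hive, or HBase rather than the console.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object StreamingCountJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("StreamingCountJob")
      .getOrCreate()

    // Hypothetical Kafka source; requires the spark-sql-kafka connector
    // on the classpath at submit time.
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "trades")
      .load()

    // Kafka delivers bytes: cast the value to a string and count events
    // per symbol in 1-minute windows, tolerating 2 minutes of lateness.
    val counts = events
      .selectExpr("CAST(value AS STRING) AS symbol", "timestamp")
      .withWatermark("timestamp", "2 minutes")
      .groupBy(window(col("timestamp"), "1 minute"), col("symbol"))
      .count()

    // Console sink for the sketch; checkpointing makes restarts safe.
    val query = counts.writeStream
      .outputMode("update")
      .format("console")
      .option("checkpointLocation", "hdfs:///checkpoints/streaming-count")
      .start()

    query.awaitTermination()
  }
}
```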