

Senior Hadoop Developer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Hadoop Developer in Jacksonville, FL (Hybrid – 2 days/week onsite), offering $60–$65/hr on W-2. Requires 5+ years in the Hadoop ecosystem, strong Spark (Scala/Java) skills, and experience with Apache Iceberg and CDC tools.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
520
🗓️ - Date discovered
August 9, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
Hybrid
📄 - Contract type
W2 Contractor
🔒 - Security clearance
Unknown
📍 - Location detailed
Jacksonville, FL
🧠 - Skills detailed
#Hadoop #Debugging #Python #Documentation #Unit Testing #JSON (JavaScript Object Notation) #Code Reviews #GIT #Data Ingestion #Data Science #Big Data #Java #Scala #Spark (Apache Spark) #Computer Science #Apache Iceberg #Agile #Deployment #HDFS (Hadoop Distributed File System)
Role description
Job Title: Senior Hadoop Developer
Location: Jacksonville, FL (Hybrid – 2 days/week onsite)
Rate: $60–$65/hr on W-2 (No C2C)
Position Overview:
We are seeking a Senior Hadoop Developer to design, build, and support scalable big data solutions, focusing on ingestion pattern development and Spark-based orchestration. You’ll play a critical role in managing large-scale data workflows, collaborating across teams to ensure efficient data ingestion, processing, and validation.
Key Responsibilities:
• Design and implement modern ingestion patterns using Spark, ensuring scalability and performance
• Build and maintain JARs for file parsing (CSV, JSON, Avro, Parquet) and writing to Iceberg (see the sketch after this list)
• Integrate ingestion frameworks with Recon and CDC tools for seamless data validation and real-time processing
• Collaborate with engineers, data scientists, and business teams to align ingestion solutions with broader architecture
• Perform code reviews, maintain quality standards, and lead troubleshooting and optimization efforts
• Participate in unit testing, documentation, deployment, and lifecycle management
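As a concrete illustration of the file-parsing responsibility above, here is a minimal Spark (Scala) sketch that parses CSV files and appends them to an Iceberg table. The catalog name (lake), warehouse path, landing-zone path, and table name are hypothetical placeholders, not details from this role:

```scala
import org.apache.spark.sql.SparkSession

object CsvToIcebergIngestion {
  def main(args: Array[String]): Unit = {
    // Register a hypothetical Iceberg catalog named "lake" backed by HDFS.
    val spark = SparkSession.builder()
      .appName("csv-to-iceberg-ingestion")
      .config("spark.sql.catalog.lake", "org.apache.iceberg.spark.SparkCatalog")
      .config("spark.sql.catalog.lake.type", "hadoop")
      .config("spark.sql.catalog.lake.warehouse", "hdfs:///warehouse/iceberg")
      .getOrCreate()

    // Parse landed CSV files; a production job would pin an explicit schema
    // and run validation checks before writing.
    val df = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("hdfs:///landing/orders/*.csv")

    // Append to an Iceberg table through the DataFrameWriterV2 API; swapping
    // the reader (.json, .parquet, or the Avro reader) covers the other formats.
    df.writeTo("lake.db.orders").append()

    spark.stop()
  }
}
```

A job like this would typically be packaged as a JAR (e.g., with sbt assembly) and submitted via spark-submit, which is the usual shape of the "build and deploy JARs" work described here.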
Must-Have Skills & Experience:
• 5+ years working within the Hadoop ecosystem (HDFS, Spark, Hive)
• Strong hands-on experience with Spark (Scala/Java)
• Proficiency in Java and/or Scala
• Experience with Apache Iceberg and data warehousing concepts
• Familiarity with Change Data Capture (CDC) tools and Recon frameworks (a minimal CDC-apply sketch follows this list)
• Experience building and deploying JARs
• Strong understanding of ingestion patterns, data validation, and quality processes
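For the CDC skill above, one common pattern (an assumption for illustration, not something specified in this posting) is to land change events in a staging table and apply them to the Iceberg target with Spark SQL's MERGE INTO, which requires Iceberg's Spark session extensions to be enabled. The staging table, key column, and op column below are hypothetical:

```scala
// Minimal CDC-apply sketch, assuming an Iceberg-enabled SparkSession (`spark`)
// and a staging table of change events with a hypothetical `op` column
// holding "I" (insert), "U" (update), or "D" (delete).
spark.sql("""
  MERGE INTO lake.db.orders AS t
  USING lake.db.orders_cdc_staging AS s
  ON t.order_id = s.order_id
  WHEN MATCHED AND s.op = 'D' THEN DELETE        -- drop deleted rows
  WHEN MATCHED THEN UPDATE SET *                 -- apply updates in place
  WHEN NOT MATCHED AND s.op <> 'D' THEN INSERT * -- add new rows
""")
```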
Additional Requirements:
• Strong analytical, problem-solving, and debugging skills
• Familiarity with SDLC and Agile environments
• Excellent verbal and written communication
• Ability to work independently and within a cross-functional team
• Proficiency with tools like Git, Visio, and MS Office Suite
Preferred Technical Stack:
• Languages/Tools: Spark, Scala, Python, Java
• Frameworks: Iceberg, CDC tools, Recon Frameworks
• Platforms: Hadoop (HDFS), Hive
Education:
• Bachelor’s degree in Computer Science, Information Systems, or related field — or equivalent work experience