

Hadoop Developer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Hadoop Developer with a contract through December 2025, hybrid in Jacksonville, FL. Requires 5+ years in Hadoop, Spark, Scala, and Iceberg. Strong design, ingestion pattern, and data processing experience essential.
Country
United States
Currency
Unknown
Day rate
-
Date discovered
August 13, 2025
Project duration
More than 6 months
Location type
Hybrid
Contract type
Unknown
Security clearance
Unknown
Location detailed
Jacksonville, FL 32246
Skills detailed
#Strategy #Data Processing #HDFS (Hadoop Distributed File System) #JSON (JavaScript Object Notation) #Documentation #Programming #Hadoop #Apache Spark #Java #Unit Testing #Data Science #Code Reviews #Python #Data Reconciliation #Data Engineering #Quality Assurance #Computer Science #Data Quality #Spark (Apache Spark) #Scala #Kafka (Apache Kafka) #Data Ingestion #Base #Debugging #Data Architecture #Deployment
Role description
Senior Hadoop Developer - Hybrid

ARC Group has an immediate opportunity for a Senior Hadoop Developer! This is a hybrid position in Jacksonville, FL, starting as a contract running through December 2025 with strong potential to extend or convert to FTE. This is a fantastic opportunity to join a well-respected organization offering tremendous career growth potential.

At ARC Group, we are committed to fostering a diverse and inclusive workplace where everyone feels valued and respected. We believe that diverse perspectives lead to better innovation and problem-solving. As an organization, we embrace diversity in all its forms and encourage individuals from underrepresented groups to apply.

Reference # 18839-1. Candidates must currently have permanent US work authorization. Sorry, but we are not considering any candidates from outside companies for this position (no C2C, 3rd party / brokering).
• Hybrid/onsite (Jacksonville) - 2 days per week
• Project Scope: Candidates will lead the development and implementation of the next-generation Ingestion Framework (a.k.a. Ingestion Framework Modernization) that writes data into Iceberg. These resources will lead the design and development of the new Ingestion Framework using Apache Spark, ensuring it meets requirements for scalability, performance, and reliability. A Senior Hadoop Developer is responsible for the design, development, and operation of systems that store and manage large amounts of data. Most Senior Hadoop Developers have a computer software background and a degree in information systems, software engineering, or computer science.

Must Have as Prior Experience:
Design and Develop Ingestion Patterns: Lead the design and development of modernized ingestion patterns using Spark as the base orchestration engine, ensuring scalability, reliability, and performance (a minimal illustrative sketch follows this list).
Create JARs for File Parsing: Develop and maintain JARs for parsing files of various formats (e.g., CSV, JSON, Avro, Parquet) and writing to Iceberg, ensuring data quality and integrity.
Integrate with Recon Framework: Integrate the ingestion patterns with the Recon Framework through JARs, ensuring seamless data reconciliation and validation.
Integrate with CDC Tool: Integrate the ingestion patterns with the CDC (Change Data Capture) Tool, enabling real-time data ingestion and processing.
Collaborate with Cross-Functional Teams: Work closely with data engineers, data scientists, and other stakeholders to ensure the ingestion patterns and configurations meet business requirements and are aligned with the overall data architecture.
Troubleshoot and Optimize: Troubleshoot issues, optimize performance, and ensure that ingestion of different types of files runs efficiently and effectively.
Code Review and Quality Assurance: Perform code reviews, ensure adherence to coding standards, and maintain high-quality code.
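To make the Spark-to-Iceberg ingestion pattern concrete, here is a minimal, hypothetical sketch of a batch job that parses a CSV file and writes it to an Iceberg table. The catalog, warehouse path, and table names are illustrative assumptions, not the employer's actual framework; JSON, Avro, or Parquet sources would simply swap the reader format.

```scala
// Hypothetical sketch only: a Spark batch job that parses a CSV file and
// writes it to an Iceberg table. Catalog, warehouse path, and table names
// are placeholders, not the employer's actual configuration.
import org.apache.spark.sql.SparkSession

object CsvToIcebergIngest {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("csv-to-iceberg-ingest")
      // Assumes the Iceberg Spark runtime is on the classpath and a Hadoop
      // catalog named "demo" points at the warehouse location.
      .config("spark.sql.catalog.demo", "org.apache.iceberg.spark.SparkCatalog")
      .config("spark.sql.catalog.demo.type", "hadoop")
      .config("spark.sql.catalog.demo.warehouse", "hdfs:///warehouse/iceberg")
      .getOrCreate()

    // Parse the source file; JSON/Avro/Parquet sources would swap the format.
    val df = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("hdfs:///landing/customers/*.csv")

    // Create or replace the target Iceberg table from the parsed data.
    df.writeTo("demo.raw.customers").createOrReplace()

    spark.stop()
  }
}
```

In a production framework, schema validation, data-quality checks, and reconciliation hooks would typically sit between the read and write steps, packaged as the JARs described above.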
Job Requirements:
Experience with and understanding of unit testing, release procedures, coding design and documentation protocols, as well as change management procedures
Proficiency using versioning tools
Thorough knowledge of Information Technology fields and computer systems
Demonstrated organizational, analytical and interpersonal skills
Flexible team player
Ability to manage tasks independently and take ownership of responsibilities
Ability to learn from mistakes and apply constructive feedback to improve performance
Must demonstrate initiative and effective independent decision-making skills
Ability to communicate technical information clearly and articulately
Ability to adapt to a rapidly changing environment
In-depth understanding of the systems development life cycle
Proficiency programming in more than one object-oriented programming language
Proficiency using standard desktop applications such as MS Suite and flowcharting tools such as Visio
Proficiency using debugging tools
High critical thinking skills to evaluate alternatives and present solutions that are consistent with business objectives and strategy
Must Have Skills:
Technical Skills:
5+ years of experience in Hadoop ecosystem (HDFS, Spark, Hive, etc.)
Strong expertise in Spark (Scala/Java) and experience with Spark-based data processing frameworks
Experience with Iceberg and data warehousing concepts
Proficiency in Java and/or Scala programming languages
Experience with JAR (Java Archive) development and deployment
Familiarity with CDC tools (Kafka, etc.) and Recon Frameworks (a streaming sketch follows this list)
Proficiency in building JARs for parsing different types of files
Ingestion Patterns Experience: Proven experience in designing and developing ingestion patterns, preferably with Spark as the base orchestration engine.
Data Processing and Integration: Experience with data processing, integration, and reconciliation, including data quality and validation.
Collaboration and Communication: Excellent collaboration and communication skills, with the ability to work with cross-functional teams and stakeholders.
Problem-Solving and Troubleshooting: Strong problem-solving and troubleshooting skills, with the ability to analyze complex issues and develop effective solutions.
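As an illustration of the CDC-style requirement, the hypothetical sketch below consumes change events from a Kafka topic with Spark Structured Streaming and appends them to an Iceberg table. The broker address, topic, payload schema, and table names are assumptions made for the example only; a real CDC tool would dictate the actual event schema.

```scala
// Hypothetical sketch only: streaming CDC events from Kafka into an Iceberg
// table. Broker, topic, schema, and table names are illustrative assumptions,
// and the Iceberg catalog is assumed to be configured on the Spark session.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.from_json
import org.apache.spark.sql.types._

object CdcKafkaToIceberg {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("cdc-kafka-to-iceberg")
      .getOrCreate()
    import spark.implicits._

    // Assumed shape of a change event; real schemas depend on the CDC tool.
    val changeSchema = new StructType()
      .add("op", StringType)        // e.g. insert / update / delete
      .add("id", LongType)
      .add("payload", StringType)
      .add("ts", TimestampType)

    // Read raw Kafka records and parse the JSON value into columns.
    val changes = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "orders.cdc")
      .load()
      .select(from_json($"value".cast("string"), changeSchema).as("c"))
      .select("c.*")

    // Append-only sink; merge/upsert logic would be layered on per pattern.
    changes.writeStream
      .format("iceberg")
      .outputMode("append")
      .option("checkpointLocation", "hdfs:///checkpoints/orders_cdc")
      .toTable("demo.raw.orders_changes")
      .awaitTermination()
  }
}
```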
Specific Tools/Languages Required:
HADOOP
Spark
Scala
Python
Required Work Experience:
5+ years of related work experience; professional experience with technical design and coding in the IT industry
Required Education:
Related Bachelor's degree or related work experience
Would you like to know more about our new opportunity? For immediate consideration, please send your resume directly to Suresh Gaddala at suresh@arcgonline.com or apply online while viewing all of our open positions at www.arcgonline.com.

ARC Group is a Forbes-ranked top 20 recruiting and executive search firm working with clients nationwide to recruit the highest quality technical resources. We have achieved this by understanding both our candidates' and clients' needs and goals and serving both with integrity and a shared desire to succeed.

At ARC Group, we are committed to providing equal employment opportunities and fostering an inclusive work environment. We encourage applications from all qualified individuals regardless of race, ethnicity, religion, gender identity, sexual orientation, age, disability, or any other protected status. If you require accommodations during the recruitment process, please let us know.

Position is offered with no fee to candidate.