

Sr. Hadoop Developer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Sr. Hadoop Developer on a W2 contract for over 6 months, hybrid (2 days onsite). Pay is $60-$65/hour. Requires 5+ years in Hadoop; expertise in Spark, Scala, and Python; and experience with Iceberg and CDC tools.
Country
United States
Currency
$ USD
Day rate
520
Date discovered
August 12, 2025
Project duration
More than 6 months
Location type
Hybrid
Contract type
W2 Contractor
Security clearance
Unknown
Location detailed
Jacksonville, FL 32212
Skills detailed
#Deployment #HDFS (Hadoop Distributed File System) #Data Reconciliation #Python #Java #Base #Data Ingestion #Programming #Computer Science #Spark (Apache Spark) #Data Architecture #JSON (JavaScript Object Notation) #Apache Spark #Data Quality #Data Processing #Debugging #Unit Testing #Kafka (Apache Kafka) #Quality Assurance #API (Application Programming Interface) #Data Engineering #Hadoop #Data Science #Business Analysis #Code Reviews #Scala
Role description
Sr. Hadoop Developer (W2 Contract - US Citizen/Permanent Residents Only) - Hybrid, 2 Days/Week Onsite
Lead the development and implementation of the next-generation Ingestion Framework that writes data into Iceberg. This role leads the design and development of the new Ingestion Framework using Apache Spark, ensuring it meets requirements for scalability, performance, and reliability.
Responsible for the design, development, and operation of systems that store and manage large amounts of data. Most Sr. Hadoop developers have a software background and hold a degree in information systems, software engineering, or computer science.
Responsibilities
Write code for moderately to highly complex system designs. Write programs that span platforms. Code and/or create Application Programming Interfaces (APIs).
Write code for enhancing existing programs or developing new programs.
Review code developed by IT Developers.
Provide input to and drive programming standards.
Write detailed technical specifications for subsystems. Identify integration points.
Report missing elements found in system and functional requirements and explain their impact on the subsystem to team members.
Consult with other Sr. IT Developers, IT Developers, Business Analysts, Systems Analysts, Project Managers, and vendors.
Scope time, resources, etc., required to complete programming projects. Seek review from other Sr IT Developers, Business Analysts, Systems Analysts or Project Managers on estimates.
Perform unit testing and debugging. Set test conditions based upon code specifications. May need assistance from other IT Developers and team members to debug more complex errors.
Support the transition of the application throughout the Product Development life cycle. Document what has to be migrated. Subsystems may require additional coordination points.
Research vendor products and alternatives. Conduct vendor product gap analyses and comparisons.
Accountable for including IT Controls and following standard corporate practices to protect the confidentiality, integrity, and availability of the application and the data it processes or outputs.
Must Have
Design and Develop Ingestion Patterns: Lead the design and development of modernized ingestion patterns using Spark as the base orchestration engine, ensuring scalability, reliability, and performance.
Create JARs for File Parsing: Develop and maintain JARs for parsing files of various formats (e.g., CSV, JSON, Avro, Parquet) and writing them to Iceberg, ensuring data quality and integrity (a minimal Spark-to-Iceberg parsing sketch follows this list).
Integrate with Recon Framework: Integrate the ingestion patterns with the Recon Framework through JARs, ensuring seamless data reconciliation and validation.
Integrate with CDC Tool: Integrate the ingestion patterns with the CDC (Change Data Capture) Tool, enabling real-time data ingestion and processing.
Collaborate with Cross-Functional Teams: Work closely with data engineers, data scientists, and other stakeholders to ensure the ingestion patterns and configurations meet business requirements and align with the overall data architecture.
Troubleshoot and Optimize: Troubleshoot issues, optimize performance, and ensure ingestion of the different file types runs efficiently and effectively.
Code Review and Quality Assurance: Perform code reviews, ensure adherence to coding standards, and maintain high-quality code.
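The posting includes no code, but as a rough illustration of the file-parsing-to-Iceberg work described above, here is a minimal Spark Scala sketch. The catalog name (lake), warehouse path, table name, and input path are all hypothetical, and it assumes the matching iceberg-spark-runtime JAR is on the classpath.

import org.apache.spark.sql.SparkSession

object FileToIceberg {
  def main(args: Array[String]): Unit = {
    // Hypothetical catalog and paths -- the posting does not specify the real configuration.
    val spark = SparkSession.builder()
      .appName("ingestion-framework-sketch")
      .config("spark.sql.catalog.lake", "org.apache.iceberg.spark.SparkCatalog")
      .config("spark.sql.catalog.lake.type", "hadoop")
      .config("spark.sql.catalog.lake.warehouse", "hdfs:///warehouse/iceberg")
      .getOrCreate()

    // Parse a CSV landing file; the json/avro/parquet readers follow the same pattern.
    val df = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("hdfs:///landing/customers/*.csv")

    // Append into an existing Iceberg table via Spark's DataFrameWriterV2 API
    // (use createOrReplace() instead of append() to create the table).
    df.writeTo("lake.db.customers").append()

    spark.stop()
  }
}

Packaged as a JAR and launched with spark-submit, this is roughly the shape of the parsing JARs the role calls for.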
Specific Tools/Languages Required:
HADOOP
Spark
Scala
Python
Required Work Experience
5+ years of experience in Hadoop ecosystem (HDFS, Spark, Hive, etc.)
Strong expertise in Spark (Scala/Java) and experience with Spark-based data processing frameworks
Experience with Iceberg and data warehousing concepts
Proficiency in Java and/or Scala programming languages
Experience with JAR (Java Archive) development and deployment
Familiarity with CDC tools (Kafka, etc.) and Recon Frameworks (a hedged Kafka-to-Iceberg streaming sketch follows this list)
Proficiency in building JARs for parsing different types of files
Ingestion Patterns Experience: Proven experience in designing and developing ingestion patterns, preferably with Spark as the base orchestration engine.
Data Processing and Integration: Experience with data processing, integration, and reconciliation, including data quality and validation.
Collaboration and Communication: Excellent collaboration and communication skills, with the ability to work with cross-functional teams and stakeholders.
Problem-Solving and Troubleshooting: Strong problem-solving and troubleshooting skills, with the ability to analyze complex issues and develop effective solutions.
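For the CDC side, here is a hedged Scala sketch of near-real-time ingestion with Spark Structured Streaming. The broker address, topic, table, and checkpoint path are placeholders; it assumes the same Iceberg catalog configuration as the batch sketch above, plus the spark-sql-kafka connector on the classpath.

// Reuses the SparkSession and "lake" catalog setup from the batch sketch above.
val changes = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "broker:9092") // placeholder broker
  .option("subscribe", "cdc.customers")             // hypothetical CDC topic
  .load()
  .selectExpr("CAST(key AS STRING) AS key", "CAST(value AS STRING) AS payload")

// Stream the raw change events into an Iceberg table; downstream jobs can merge them.
changes.writeStream
  .format("iceberg")
  .outputMode("append")
  .option("checkpointLocation", "hdfs:///checkpoints/cdc.customers")
  .toTable("lake.db.customers_changes")
  .awaitTermination()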
Required Education
Related Bachelor's degree or related work experience
Job Types: Full-time, Contract
Pay: $60.00 - $65.00 per hour
Benefits:
Dental insurance
Health insurance
Vision insurance
Work Location: In person