REMOTE Hadoop Developer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a REMOTE Hadoop Developer on a 5-month contract, offering $50-60/hr. Candidates must have 5+ years in software development, strong Hadoop, Spark, and Scala skills, and a Bachelor's in a related field.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
$480
🗓️ - Date discovered
May 22, 2025
🕒 - Project duration
5 months
🏝️ - Location type
Remote
📄 - Contract type
W2 Contractor
🔒 - Security clearance
Unknown
📍 - Location detailed
United States
🧠 - Skills detailed
#GIT #Business Analysis #Hadoop #Debugging #Programming #Python #Visualization #Computer Science #Unit Testing #Cloud #GCP (Google Cloud Platform) #Spark (Apache Spark) #Version Control #Deployment #Documentation #Agile #Big Data #Scala #Azure HDInsight #Migration #AWS EMR (Amazon Elastic MapReduce) #Java #AWS (Amazon Web Services) #Azure
Role description
Job Title: Hadoop Developer
Location: Remote
Pay Rate: $50-60/hr on W-2
Duration: 5-month contract

Position Summary:
We are seeking a skilled Hadoop Developer with a strong background in big data ecosystems, particularly Spark and Scala, to join our IT development team. This role is responsible for the design, development, implementation, and maintenance of data-centric systems that handle high volumes of structured and unstructured data. The ideal candidate will be experienced in full-stack development within big data platforms, have a passion for writing clean, scalable code, and be comfortable working in a fast-paced, collaborative environment.

Key Responsibilities:
• Develop, test, and maintain big data applications using Hadoop, Spark, and Scala (see the sketch after this section).
• Write efficient and scalable code for moderately complex system designs and applications.
• Review and refine peers' code to ensure adherence to programming standards.
• Write and maintain technical specifications and documentation for subsystems and APIs.
• Perform unit testing, debug code, and ensure code integrity through version control.
• Collaborate with business analysts, systems analysts, project managers, and other IT developers to deliver high-quality solutions.
• Support application migration and deployment across environments.
• Ensure all developed solutions comply with IT controls and data protection best practices.

Required Qualifications:
• 5+ years of professional experience in software development and system design.
• Strong development experience with Hadoop ecosystems.
• Advanced proficiency with Spark and Scala.
• Solid understanding of unit testing, version control (Git or similar), and Agile SDLC practices.
• Proficiency in debugging tools, documentation, and technical specification writing.
• Strong analytical, problem-solving, and communication skills.
• Ability to work independently and manage multiple development tasks simultaneously.
• Bachelor's degree in Computer Science, Information Systems, or a related field, or equivalent work experience.

Preferred Skills:
• Familiarity with data visualization and reporting tools.
• Knowledge of Python or Java in a big data environment.
• Experience with cloud-based big data platforms (e.g., AWS EMR, Azure HDInsight, GCP Dataproc).

Additional Information:
• This is a W-2 only position (no C2C or third-party candidates will be considered).
• Candidates must be authorized to work in the U.S. without sponsorship.
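
For illustration only, here is a minimal sketch of the kind of Spark/Scala batch job the responsibilities describe: reading high-volume structured data, applying a transformation, and writing the result. Every name here (the object, column names, paths, Parquet format) is an assumption for the example, not something taken from the posting.

import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.functions.col

object EventCountJob {
  // The transformation is kept separate from I/O so it can be unit tested.
  def countEventsByType(events: DataFrame): DataFrame =
    events
      .filter(col("event_type").isNotNull) // drop rows with no event type
      .groupBy("event_type")
      .count()
      .withColumnRenamed("count", "event_count")

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("EventCountJob")
      .getOrCreate()

    // Illustrative locations; a real job would take these from configuration.
    val events = spark.read.parquet("hdfs:///data/events/")
    countEventsByType(events)
      .write
      .mode("overwrite")
      .parquet("hdfs:///data/event_counts/")

    spark.stop()
  }
}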
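
Since the posting also calls out unit testing, here is a hypothetical test for the transformation above, using ScalaTest with a local SparkSession. The framework choice is an assumption; the posting does not name one.

import org.apache.spark.sql.SparkSession
import org.scalatest.funsuite.AnyFunSuite

class EventCountJobSuite extends AnyFunSuite {
  // A small local session is enough to exercise the transformation.
  private val spark = SparkSession.builder()
    .master("local[2]")
    .appName("EventCountJobSuite")
    .getOrCreate()

  import spark.implicits._

  test("countEventsByType counts rows per event type and ignores nulls") {
    val input = Seq(
      ("click", 1),
      ("click", 2),
      ("view", 3),
      (null.asInstanceOf[String], 4) // should be filtered out
    ).toDF("event_type", "id")

    val result = EventCountJob.countEventsByType(input)
      .collect()
      .map(row => row.getString(0) -> row.getLong(1))
      .toMap

    assert(result === Map("click" -> 2L, "view" -> 1L))
  }
}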