Optomi

Scala Developer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a "Scala Developer" on a 12-month remote contract paying $400/day, W2 only. It requires 5+ years of software engineering experience; expertise in Apache Spark, HDFS, and SQL; and prior public sector or healthcare project experience.
🌎 - Country
United States
πŸ’± - Currency
$ USD
πŸ’° - Day rate
400
πŸ—“οΈ - Date
October 23, 2025
πŸ•’ - Duration
More than 6 months
🏝️ - Location
Remote
πŸ“„ - Contract
W2 Contractor
πŸ”’ - Security
Unknown
πŸ“ - Location detailed
United States
🧠 - Skills detailed
#Consulting #Distributed Computing #Data Management #Programming #Big Data #Data Pipeline #Apache Spark #Data Framework #HDFS (Hadoop Distributed File System) #Cloudera #Computer Science #Data Governance #Cloud #Data Processing #Scala #Hadoop #Spark (Apache Spark) #Java #SQL (Structured Query Language) #Data Architecture
Role description
Scala Developer – Big Data & Analytics | Remote (U.S.-based) | 12-Month Contract (Extension Likely) | W2 Only! There are no Corp-to-Corp options or visa sponsorship available for this position.

Optomi, in partnership with a leading consulting firm, is seeking a Scala Developer to support a critical data initiative for a large-scale public sector program. This fully remote role is part of a strategic Data Management and Analytics project and offers the opportunity to work on high-impact data solutions. Candidates will join a collaborative delivery team focused on processing high-volume healthcare encounter claims data using modern Big Data technologies.

What the right candidate will enjoy:
• Long-term contract with strong potential for extension!
• High-impact work with large-scale data initiatives in the public sector!
• Access to modern Big Data tools and enterprise-scale architecture!
• Collaborative, cross-functional team environment with both consulting and government stakeholders!

Experience of the right candidate:
• Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience).
• 5+ years of experience in software engineering, including strong functional programming expertise.
• Advanced experience working with Apache Spark using Scala for large-scale data processing.
• Proficiency with HDFS and SQL in a Big Data ecosystem.
• Solid understanding of Object-Oriented Programming principles.
• Experience working in distributed computing environments such as Hadoop or Cloudera.
• Familiarity with Drools and Java EE/J2EE frameworks.

Preferred Qualifications:
• Prior experience on public sector or healthcare data projects.
• Understanding of claims data or healthcare transaction processing (e.g., encounter data).
• Experience optimizing Spark jobs for performance and reliability.
• Familiarity with rule engines, enterprise service buses, and data governance concepts.

Responsibilities of the right candidate:
• Design, develop, and maintain Scala-based applications to process high-volume healthcare claims data.
• Build and optimize Spark-based data pipelines that align with business and regulatory requirements (see the sketch after this description).
• Collaborate with cross-functional teams, including data architects, QA, and project stakeholders.
• Participate in validation, testing, and performance tuning of data workflows.
• Ensure code quality, reusability, and adherence to best practices in distributed computing.
• Support integration of systems across the Big Data ecosystem (Hadoop, Cloudera, HDFS, etc.).

What we're looking for:
• Strong engineering fundamentals with a focus on functional and distributed programming.
• Hands-on experience with modern data platforms and Big Data frameworks.
• Ability to translate business requirements into scalable technical solutions.
• Effective communication and teamwork across technical and non-technical stakeholders.
• Self-motivated professional with a mindset for continuous learning and improvement.
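For context on the day-to-day work, below is a minimal sketch of the kind of Spark-on-Scala pipeline the responsibilities describe: reading encounter claim records from HDFS, applying a simple validation rule, and aggregating per provider. The input path, column names (provider_id, billed_amount), and the validation rule are illustrative assumptions, not details from this posting.

```scala
// Minimal sketch of a Spark/Scala claims pipeline. Paths, column names,
// and business rules are illustrative assumptions, not from the posting.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object EncounterClaimsPipeline {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("EncounterClaimsPipeline")
      .getOrCreate()

    // Hypothetical HDFS input: one row per healthcare encounter claim.
    val claims = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("hdfs:///data/claims/encounters/*.csv")

    // Illustrative validation rule: keep claims with a positive billed
    // amount and a known provider; real rules would come from the program.
    val valid = claims
      .filter(col("billed_amount") > 0 && col("provider_id").isNotNull)

    // Aggregate claim volume and spend per provider, repartitioning by the
    // grouping key to reduce shuffle skew on large inputs.
    val byProvider = valid
      .repartition(col("provider_id"))
      .groupBy("provider_id")
      .agg(
        count("*").as("claim_count"),
        sum("billed_amount").as("total_billed")
      )

    // Write results back to HDFS for downstream consumers.
    byProvider.write
      .mode("overwrite")
      .parquet("hdfs:///data/claims/summary/by_provider")

    spark.stop()
  }
}
```

The repartition-by-key step before the aggregation is one small example of the performance tuning the preferred qualifications mention; production jobs would also tune partitioning, caching, and join strategies against actual data volumes.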