Optomi

Scala Developer

โญ - Featured Role | Apply direct with Data Freelance Hub
This role is for a "Scala Developer" on a 12-month remote contract, paid on a W2 basis only. It requires 5+ years of software engineering experience and expertise in Apache Spark, HDFS, and SQL; prior public sector or healthcare project experience is preferred.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
400
-
๐Ÿ—“๏ธ - Date
October 23, 2025
🕒 - Duration
More than 6 months
-
๐Ÿ๏ธ - Location
Remote
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
๐Ÿ“ - Location detailed
United States
-
🧠 - Skills detailed
#Consulting #Distributed Computing #Data Management #Programming #Big Data #Data Pipeline #Apache Spark #Data Framework #HDFS (Hadoop Distributed File System) #Cloudera #Computer Science #Data Governance #Cloud #Data Processing #Scala #Hadoop #Spark (Apache Spark) #Java #SQL (Structured Query Language) #Data Architecture
Role description
Scala Developer – Big Data & Analytics | Remote (U.S.-based) | 12-Month Contract (Extension Likely) | W2 Only. There are no Corp-to-Corp options or visa sponsorship available for this position.

Optomi, in partnership with a leading consulting firm, is seeking a Scala Developer to support a critical data initiative for a large-scale public sector program. This fully remote role is part of a strategic Data Management and Analytics project and offers the opportunity to work on high-impact data solutions. Candidates will join a collaborative delivery team focused on processing high-volume healthcare encounter claims data using modern Big Data technologies.

What the right candidate will enjoy:
• Long-term contract with strong potential for extension
• High-impact work with large-scale data initiatives in the public sector
• Access to modern Big Data tools and enterprise-scale architecture
• Collaborative, cross-functional team environment with both consulting and government stakeholders

Experience of the right candidate:
• Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience)
• 5+ years of software engineering experience, including strong functional programming expertise
• Advanced experience with Apache Spark using Scala for large-scale data processing
• Proficiency with HDFS and SQL in a Big Data ecosystem
• Solid understanding of object-oriented programming principles
• Experience in distributed computing environments such as Hadoop or Cloudera
• Familiarity with Drools and Java EE/J2EE frameworks

Preferred qualifications:
• Prior experience on public sector or healthcare data projects
• Understanding of claims data or healthcare transaction processing (e.g., encounter data)
• Experience optimizing Spark jobs for performance and reliability
• Familiarity with rule engines, enterprise service buses, and data governance concepts
Responsibilities of the right candidate:
• Design, develop, and maintain Scala-based applications to process high-volume healthcare claims data
• Build and optimize Spark-based data pipelines that align with business and regulatory requirements
• Collaborate with cross-functional teams, including data architects, QA, and project stakeholders
• Participate in validation, testing, and performance tuning of data workflows
• Ensure code quality, reusability, and adherence to best practices in distributed computing
• Support integration of systems across the Big Data ecosystem (Hadoop, Cloudera, HDFS, etc.)

What we're looking for:
• Strong engineering fundamentals with a focus on functional and distributed programming
• Hands-on experience with modern data platforms and Big Data frameworks
• Ability to translate business requirements into scalable technical solutions
• Effective communication and teamwork across technical and non-technical stakeholders
• A self-motivated professional with a mindset for continuous learning and improvement