

Optomi
Scala Developer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a "Scala Developer" on a 12-month remote contract (W2 only) at a $400 day rate. It requires 5+ years of software engineering experience with expertise in Apache Spark, HDFS, and SQL; prior public sector or healthcare project experience is preferred.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
400
-
🗓️ - Date
October 23, 2025
⏳ - Duration
More than 6 months
-
🏝️ - Location
Remote
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
United States
-
🧠 - Skills detailed
#Consulting #Distributed Computing #Data Management #Programming #Big Data #Data Pipeline #Apache Spark #Data Framework #HDFS (Hadoop Distributed File System) #Cloudera #Computer Science #Data Governance #Cloud #Data Processing #Scala #Hadoop #Spark (Apache Spark) #Java #SQL (Structured Query Language) #Data Architecture
Role description
Scala Developer - Big Data & Analytics | Remote (U.S.-based) | 12-Month Contract (Extension Likely) | W2 Only!
No Corp-to-Corp options or visa sponsorship are available for this position.
Optomi, in partnership with a leading consulting firm, is seeking a Scala Developer to support a critical data initiative for a large-scale public sector program. This fully remote role is part of a strategic Data Management and Analytics project and offers the opportunity to work on high-impact data solutions. Candidates will join a collaborative delivery team focused on processing high-volume healthcare encounter claims data using modern Big Data technologies.
What the right candidate will enjoy:
• Long-term contract with strong potential for extension!
• High-impact work with large-scale data initiatives in the public sector!
• Access to modern Big Data tools and enterprise-scale architecture!
• Collaborative, cross-functional team environment with both consulting and government stakeholders!
Experience of the right candidate:
• Bachelor's degree in Computer Science, Engineering, or related field (or equivalent experience).
• 5+ years of experience in software engineering, including strong functional programming expertise.
• Advanced experience working with Apache Spark using Scala for large-scale data processing.
• Proficiency with HDFS and SQL in a Big Data ecosystem.
• Solid understanding of Object-Oriented Programming principles.
• Experience working in distributed computing environments such as Hadoop or Cloudera.
• Familiarity with Drools and Java EE/J2EE frameworks.
Preferred Qualifications:
• Prior experience on public sector or healthcare data projects.
• Understanding of claims data or healthcare transaction processing (e.g., encounter data).
• Experience optimizing Spark jobs for performance and reliability.
• Familiarity with rule engines, enterprise service buses, and data governance concepts.
Responsibilities of the right candidate:
• Design, develop, and maintain Scala-based applications to process high-volume healthcare claims data.
• Build and optimize Spark-based data pipelines that align with business and regulatory requirements (a minimal illustrative sketch follows this list).
• Collaborate with cross-functional teams, including data architects, QA, and project stakeholders.
• Participate in validation, testing, and performance tuning of data workflows.
• Ensure code quality, reusability, and adherence to best practices in distributed computing.
• Support integration of systems across the Big Data ecosystem (Hadoop, Cloudera, HDFS, etc.).
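To make the day-to-day concrete, here is a minimal Spark/Scala sketch of the kind of encounter-claims pipeline the responsibilities above describe. It is illustrative only: the HDFS paths, column names (claim_id, billed_amount, service_date), and the validation rule are hypothetical stand-ins, not details of the actual program.

```scala
// Minimal sketch of a Spark-based claims pipeline; all paths, columns,
// and business rules below are hypothetical examples.
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.functions._

object EncounterClaimsPipeline {

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("encounter-claims-pipeline")
      .getOrCreate()

    // Read raw encounter claims from HDFS (format and path are assumptions).
    val rawClaims = spark.read
      .option("header", "true")
      .csv("hdfs:///data/claims/encounters/raw/")

    val curated = validateClaims(rawClaims)

    // Persist curated output partitioned by service date for downstream analytics.
    curated.write
      .mode("overwrite")
      .partitionBy("service_date")
      .parquet("hdfs:///data/claims/encounters/curated/")

    spark.stop()
  }

  // Example validation/enrichment step: drop records with no claim ID and
  // flag claims whose billed amount exceeds an example threshold.
  def validateClaims(claims: DataFrame): DataFrame =
    claims
      .filter(col("claim_id").isNotNull)
      .withColumn("high_cost_flag", col("billed_amount").cast("double") > lit(10000.0))
}
```

In the actual environment, the read/write formats, partitioning strategy, and rule logic (for example, any Drools-based rules) would follow the program's own data contracts rather than the placeholders shown here.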
What we're looking for:
• Strong engineering fundamentals with a focus on functional and distributed programming.
• Hands-on experience with modern data platforms and Big Data frameworks.
• Ability to translate business requirements into scalable technical solutions.
• Effective communication and teamwork across technical and non-technical stakeholders.
• Self-motivated professional with a mindset for continuous learning and improvement.






