Iris Software Inc.

Scala Developer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Scala Developer with 9+ years of experience, focusing on financial services. It's a long-term contract based in Whippany, NJ (Hybrid). Key skills include Scala, Spark, data pipelines, and cloud platforms (Azure, AWS, GCP).
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
October 25, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
New Jersey, United States
-
🧠 - Skills detailed
#Scala #Apache Spark #Hadoop #Storage #Schema Design #Big Data #HDFS (Hadoop Distributed File System) #Kafka (Apache Kafka) #Scrum #Cloud #ML (Machine Learning) #Datasets #GIT #Azure DevOps #Data Pipeline #Automation #Delta Lake #Spark (Apache Spark) #Data Engineering #Data Processing #DevOps #Data Ingestion #Databricks #SQL (Structured Query Language) #ETL (Extract, Transform, Load) #Azure #Data Architecture #GCP (Google Cloud Platform) #Jenkins #AWS (Amazon Web Services) #Agile
Role description
Greetings! We're looking for an experienced Scala Developer who can design and build high-performance data processing systems within a large-scale distributed environment. You'll work closely with Data Engineers, Architects, and Product teams to develop reliable data pipelines and contribute to our enterprise Big Data platform.

Location: Whippany, NJ (Hybrid – 3 days onsite)
Type: Long-term contract
Domain: Financial Services

Key Responsibilities
• Design, develop, and maintain scalable data pipelines using Scala and Spark
• Implement efficient ETL/ELT solutions for large datasets across Hadoop, Databricks, or EMR environments
• Optimize data processing performance and ensure code quality through testing and best practices
• Collaborate with cross-functional teams on data architecture, schema design, and performance tuning
• Develop reusable components and frameworks to support analytics and machine learning workloads
• Work with Kafka, Hive, HDFS, and Delta Lake for data ingestion and storage
• Contribute to CI/CD and automation using Git, Jenkins, or Azure DevOps

Required Skills & Experience
• 9+ years of hands-on experience in Scala development (functional and object-oriented)
• 5+ years of experience with Apache Spark (core, SQL, structured streaming)
• Solid understanding of distributed data processing and storage systems (HDFS, Hive, Delta Lake)
• Experience with Kafka or other event streaming technologies
• Strong SQL and performance optimization skills
• Familiarity with cloud platforms such as Azure, AWS, or GCP (preferably with Databricks)
• Experience working in Agile/Scrum teams

Best Regards,
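P.S. For candidates who want a feel for the day-to-day work, below is a minimal sketch of the kind of Scala/Spark structured streaming pipeline the responsibilities describe: Kafka ingestion into a Delta Lake table. It assumes the spark-sql-kafka and delta-spark packages are on the classpath, and the broker, topic, schema, and paths are all illustrative, not project specifics.

    // Illustrative only: reads hypothetical click events from Kafka and
    // appends them to a Delta table. Names and endpoints are placeholders.
    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions._
    import org.apache.spark.sql.types._

    object ClickstreamPipeline {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("clickstream-pipeline")
          .getOrCreate()
        import spark.implicits._

        // Hypothetical event schema; a real job would share this with producers.
        val schema = StructType(Seq(
          StructField("userId", StringType),
          StructField("action", StringType),
          StructField("ts", TimestampType)
        ))

        // Ingest from Kafka and parse the JSON payload into typed columns.
        val events = spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092") // placeholder broker
          .option("subscribe", "clickstream")               // placeholder topic
          .load()
          .select(from_json($"value".cast("string"), schema).as("e"))
          .select("e.*")

        // Append to a Delta table; the checkpoint gives the sink
        // exactly-once semantics across restarts.
        events.writeStream
          .format("delta")
          .option("checkpointLocation", "/chk/clickstream") // placeholder path
          .outputMode("append")
          .start("/delta/clickstream")                      // placeholder path
          .awaitTermination()
      }
    }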