Astir IT Solutions, Inc.

Scala Developer (Only W2)

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Scala Developer (W2) with 10+ years of experience, located in Whippany, NJ (Hybrid). Key skills include Scala, Spark, ETL/ELT, and data processing in cloud environments. Contract length and pay rate are unspecified.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
October 23, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Hybrid
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Whippany, NJ
-
🧠 - Skills detailed
#Data Processing #Data Engineering #Schema Design #SQL (Structured Query Language) #Azure #Scrum #ETL (Extract, Transform, Load) #Data Pipeline #Azure DevOps #GIT #AWS (Amazon Web Services) #Agile #Kafka (Apache Kafka) #Data Architecture #Delta Lake #Scala #ML (Machine Learning) #DevOps #Datasets #Spark (Apache Spark) #Hadoop #Databricks #HDFS (Hadoop Distributed File System) #Storage #Data Ingestion #Jenkins #Cloud #Big Data #Apache Spark #GCP (Google Cloud Platform) #Automation
Role description
Role: Scala Developer (Only W2)
Location: Whippany, NJ (Hybrid – 3 days onsite)
Minimum 10+ years of experience required.

Job description:
We're looking for an experienced Scala Developer who can design and build high-performance data processing systems within a large-scale distributed environment. You'll work closely with Data Engineers, Architects, and Product teams to develop reliable data pipelines and contribute to our enterprise Big Data platform.

Key Responsibilities:
• Design, develop, and maintain scalable data pipelines using Scala and Spark (an illustrative sketch appears at the end of this posting)
• Implement efficient ETL/ELT solutions for large datasets across Hadoop, Databricks, or EMR environments
• Optimize data processing performance and ensure code quality through testing and best practices
• Collaborate with cross-functional teams on data architecture, schema design, and performance tuning
• Develop reusable components and frameworks to support analytics and machine learning workloads
• Work with Kafka, Hive, HDFS, and Delta Lake for data ingestion and storage
• Contribute to CI/CD and automation using Git, Jenkins, or Azure DevOps

Required Skills & Experience:
• 10+ years of hands-on experience in Scala development (functional and object-oriented)
• 5+ years of experience with Apache Spark (core, SQL, structured streaming)
• Solid understanding of distributed data processing and storage systems (HDFS, Hive, Delta Lake)
• Experience with Kafka or other event streaming technologies
• Strong SQL and performance optimization skills
• Familiarity with cloud platforms such as Azure, AWS, or GCP (preferably with Databricks)
• Experience working in Agile/Scrum teams

If I missed your call, please drop me an email.

Thank you,
Harish
Talent Acquisition
Astir IT Solutions, Inc. – An E-Verified Company
Email: harishj@astirit.com
Direct: 732-694-6000 • 788
50 Cragwood Rd., Suite #219, South Plainfield, NJ 07080
www.astirit.com
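For candidates gauging the kind of work the responsibilities describe, below is a minimal, illustrative sketch of a Scala/Spark batch ETL job: read raw data, clean and aggregate it, and write a curated Delta Lake table. It is not part of the role or the client's codebase; the paths, column names, and dataset are hypothetical placeholders, and the Delta write assumes the Delta Lake library is on the classpath.

```scala
import org.apache.spark.sql.{SparkSession, functions => F}

object OrdersEtl {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("orders-etl")
      .getOrCreate()

    // Extract: read raw order events from a hypothetical HDFS landing zone
    val raw = spark.read
      .option("header", "true")
      .csv("hdfs:///landing/orders/")

    // Transform: cast types, drop malformed rows, aggregate per customer per day
    val daily = raw
      .withColumn("amount", F.col("amount").cast("double"))
      .filter(F.col("amount").isNotNull)
      .groupBy(F.col("customer_id"), F.to_date(F.col("order_ts")).as("order_date"))
      .agg(
        F.sum("amount").as("total_amount"),
        F.count(F.lit(1)).as("order_count")
      )

    // Load: write the curated dataset as a Delta Lake table, partitioned by date
    // (assumes the delta-spark dependency is available)
    daily.write
      .format("delta")
      .mode("overwrite")
      .partitionBy("order_date")
      .save("hdfs:///curated/orders_daily/")

    spark.stop()
  }
}
```

In a setting like the one described above, a job of this shape would typically be packaged with sbt and deployed through the team's CI/CD tooling (Git with Jenkins or Azure DevOps), though the actual build and deployment setup is specific to the client.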