Scala with Spark

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Scala with Spark developer with 6-9 years of experience, focusing on Apache Spark, Scala, and big data technologies. Key skills include CI/CD tools and agile methodologies. Contract length and pay rate are unspecified.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
-
🗓️ - Date discovered
September 4, 2025
🕒 - Project duration
Unknown
-
🏝️ - Location type
Unknown
-
📄 - Contract type
Unknown
-
🔒 - Security clearance
Unknown
-
📍 - Location detailed
Mount Laurel, NJ
-
🧠 - Skills detailed
#BitBucket #Jenkins #Big Data #Jira #Git #Libraries #Hadoop #Distributed Computing #Scala #Apache Spark #Spark (Apache Spark) #Agile #SQL (Structured Query Language) #Data Processing #HDFS (Hadoop Distributed File System) #Spark SQL
Role description
Scala + Spark
1. Experience level: 6 to 9 years.
2. Experience with Apache Spark/Scala, Spark SQL, and related Spark ecosystem tools and libraries.
3. Hands-on development experience building Spark applications in Scala.
4. Knowledge of big data technologies such as Hadoop, HDFS, and distributed computing frameworks for large-scale data processing.
5. Excellent communication and collaboration skills, with the ability to work effectively in a cross-functional team environment.
6. Knowledge of or experience with Git/Bitbucket, Gradle, Jenkins, Jira, Confluence, or similar tools for building Continuous Integration/Continuous Delivery (CI/CD) pipelines.
7. Technical working experience in an agile environment.