

Data Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer with 6-9 years of experience in Scala and Spark. It is a long-term contract based in Mount Laurel, NJ, requiring expertise in Hadoop, CI/CD tools, and agile environments.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: September 19, 2025
Project duration: Unknown
Location type: On-site
Contract type: Unknown
Security clearance: Unknown
Location detailed: New Jersey, United States
Skills detailed: #Big Data #Spark SQL #Jira #Libraries #HDFS (Hadoop Distributed File System) #Scala #Data Engineering #BitBucket #Data Processing #Apache Spark #Spark (Apache Spark) #Jenkins #Agile #GIT #Distributed Computing #Hadoop #SQL (Structured Query Language)
Role description
Role: Big Data Engineer (Scala with Spark/Hadoop)
Location: Mount Laurel, NJ (Day 1 Onsite)
Long term Contract
Job Description:
Scala + Spark
1. Experience level: 6 to 9 years.
2. Experience with Apache Spark/Scala, Spark SQL, and related Spark ecosystem tools and libraries.
3. Hands-on development experience building Spark applications using Scala.
4. Knowledge of Big Data technologies such as Hadoop, HDFS, and distributed computing frameworks for large-scale data processing.
5. Excellent communication and collaboration skills, with the ability to work effectively in a cross-functional team environment.
6. Knowledge of or experience with Git/Bitbucket, Gradle, Jenkins, Jira, Confluence, or similar tools for building Continuous Integration/Continuous Delivery (CI/CD) pipelines.
7. Technical working experience in an agile environment.