

Big Data Engineer with Java Background | W-2 Only | 8-12 Years of Experience Required
Featured Role | Apply directly with Data Freelance Hub
This role is for a Big Data Engineer with a Java background, requiring 8-12 years of experience. The hybrid contract position in Whippany, NJ pays W-2 only. Key skills include Java, Hadoop ecosystem, and data governance expertise.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: September 26, 2025
Project duration: More than 6 months
Location type: Hybrid
Contract type: W2 Contractor
Security clearance: Unknown
Location detailed: Hanover, NJ
Skills detailed: #HDFS (Hadoop Distributed File System) #Security #GCP (Google Cloud Platform) #Spark (Apache Spark) #HBase #Data Processing #Batch #Hadoop #Cloudera #Oracle #AWS (Amazon Web Services) #Big Data #Data Management #YARN (Yet Another Resource Negotiator) #Kubernetes #GIT #Jenkins #Databases #Cloud #Monitoring #Data Lake #SQL (Structured Query Language) #Azure #Version Control #Elasticsearch #Docker #NoSQL #Linux #Scala #Kafka (Apache Kafka) #Computer Science #Programming #Java #Data Governance #Data Engineering #Impala #Sqoop (Apache Sqoop) #Data Security #Storage
Role description
Dice is the leading career destination for tech experts at every stage of their careers. Our client, Clarkstech, is seeking the following. Apply via Dice today!
Location: Whippany, NJ | Position Type: Contract / Full-Time (Hybrid) | Work Mode: 2-3 days onsite per week
What You'll Do
β’ Build, deploy, and maintain large-scale distributed data processing systems and pipelines to support enterprise data needs.
• Design and lead end-to-end Big Data solutions, from ingestion and storage through processing and access, ensuring scalability, reliability, and performance.
β’ Develop batch and streaming workflows using Spark, Hadoop, Kafka, HBase, Hive, Impala, and NoSQL databases.
• Monitor system health, analyze performance metrics, troubleshoot issues (including Cloudera/Hadoop logs), and apply cluster configuration and performance optimizations.
β’ Use AutoSys for job scheduling and monitoring; ensure jobs run reliably, manage failures, and automate recovery where possible.
β’ Enforce and implement data governance, data security, and data management best practices across the Big Data platform.
What You Bring
• Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
β’ 8-10 years of software development/data engineering experience working with large-scale distributed systems.
β’ Minimum 4 years leading Big Data solution designs and implementations in enterprise environments.
β’ Strong programming skills in Java, J2EE, Scala.
β’ Deep hands-on experience with Hadoop ecosystem: Spark, HDFS, YARN, Hive, Impala, Sqoop, HBase; Kafka; NoSQL DBs.
• Solid experience with SQL, Elasticsearch, Oracle, and relational/non-relational data stores.
β’ Proficient in using AutoSys (or similar) for job scheduling and monitoring.
• Strong troubleshooting skills: performance tuning, log analysis, and cluster issues.
β’ Knowledge of data governance and Hadoop/Cloudera security best practices.
• Working experience with Linux, version control (Git), and CI/CD toolchains (e.g., Jenkins).
Ideal Skills
β’ Familiarity with cloud-based Big Data / data lake architectures (AWS, Azure, or Google Cloud Platform).
β’ Experience with containerization / orchestration (Docker, Kubernetes) is a plus.
β’ Ability to mentor & guide junior engineers.
β’ Excellent communication skills, strong attention to detail, and ability to work cross-functionally.
Engagement Rules
• Contract position (W-2 only); no C2C, no agencies.
β’ This is a senior-level role requiring 8+ years of professional experience in data engineering and financial services.
β’ Candidates must have verifiable project experience in big data and Java development.
• H-1B transfer available for the right candidate.
β’ Multi-year contract with annual extensions.
β’ Hybrid onsite role (Whippany, NJ).