

Matlen Silver
Big Data Developer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Big Data Developer with a contract length of "unknown", offering a pay rate of "unknown". Work is located in Jersey City, NJ, Addison, TX, or Charlotte, NC. Requires 8+ years of experience and strong skills in Spark, Redis, and Hadoop technologies.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
February 10, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Jersey City, NJ
-
🧠 - Skills detailed
#Data Analysis #Data Storage #Redis #Shell Scripting #Data Pipeline #Python #Data Processing #AWS (Amazon Web Services) #Databases #Hadoop #Data Security #Bash #Big Data #Impala #R #Kafka (Apache Kafka) #Scripting #NoSQL #Security #ML (Machine Learning) #HDFS (Hadoop Distributed File System) #Java #HBase #Cloud #GCP (Google Cloud Platform) #YARN (Yet Another Resource Negotiator) #Kerberos #Programming #Spark (Apache Spark) #Storage #Azure #Data Integration #Scala #Tableau #MongoDB #Sqoop (Apache Sqoop)
Role description
Locations: Jersey City, NJ or Addison, TX or Charlotte, NC
Design and develop high-performance, scalable real-time data processing solutions using technologies like Kafka (KStreams, KTable), Spark Streaming, and Redis to handle massive data sets from multiple channels (a minimal pipeline sketch follows this list).
Implement and optimize streaming data pipelines to process and analyze near real-time data, ensuring low-latency and high-throughput performance.
Leverage expertise in Hadoop stack technologies, including HDFS, Spark, MapReduce, YARN, Hive, Sqoop, Impala, and Hue, to design and optimize data processing workflows (see the batch workflow sketch after this list).
Utilize NoSQL databases such as Redis to support real-time data storage and retrieval for mission-critical applications.
Collaborate with cross-functional teams to identify system bottlenecks, benchmark performance, and propose innovative solutions to enhance system efficiency.
Take ownership of defining Big Data and real-time data strategies and roadmaps for the Enterprise, aligning them with business objectives.
Stay abreast of emerging technologies and industry trends related to Big Data and real-time data processing, continuously evaluating new tools and frameworks for potential integration.
Provide guidance and mentorship to junior teammates, fostering a culture of technical excellence.
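
As a minimal sketch of the pipeline shape the first two bullets describe, the snippet below wires Spark Structured Streaming to a Kafka topic and pushes per-key results into Redis on each micro-batch. It assumes Spark 3.x with the spark-sql-kafka-0-10 connector and the Jedis client on the classpath; the topic name, broker address, and Redis endpoint are hypothetical, not taken from the posting.

```scala
import org.apache.spark.sql.{DataFrame, Row, SparkSession}
import redis.clients.jedis.Jedis

object KafkaToRedis {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("kafka-to-redis").getOrCreate()

    // Subscribe to a hypothetical "events" topic; Kafka hands key/value over as bytes.
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092") // hypothetical brokers
      .option("subscribe", "events")
      .load()
      .selectExpr("CAST(key AS STRING) AS userId", "CAST(value AS STRING) AS payload")

    // Write each micro-batch to Redis so downstream services get low-latency reads.
    // Binding the function to a typed val sidesteps the Scala/Java overload
    // ambiguity of foreachBatch on some Spark/Scala version combinations.
    val writeToRedis: (DataFrame, Long) => Unit = (batch, _) =>
      batch.foreachPartition { (rows: Iterator[Row]) =>
        val jedis = new Jedis("redis-host", 6379) // hypothetical endpoint
        try rows.foreach(r => jedis.set(s"user:${r.getString(0)}", r.getString(1)))
        finally jedis.close()
      }

    val query = events.writeStream
      .foreachBatch(writeToRedis)
      // Checkpointing persists Kafka offsets, making restarts fault-tolerant.
      .option("checkpointLocation", "hdfs:///tmp/checkpoints/kafka-to-redis")
      .start()

    query.awaitTermination()
  }
}
```

Note that foreachBatch gives at-least-once delivery into Redis; that is usually acceptable here because each write is an idempotent key overwrite.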
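The Hadoop-stack bullet is batch-oriented; under the same hedged assumptions (hypothetical Hive table and HDFS path), a workflow of that kind might look like the sketch below: Spark resolves a table through the Hive metastore, aggregates, and writes date-partitioned Parquet that Hive or Impala readers can partition-prune.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, count, sum}

object DailyRollup {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("daily-rollup")
      .enableHiveSupport() // resolve tables through the Hive metastore
      .getOrCreate()

    spark.table("ods.transactions") // hypothetical Hive table
      .groupBy(col("trade_date"), col("channel"))
      .agg(sum("amount").as("total_amount"), count("*").as("txn_count"))
      .write
      .mode("overwrite")
      .partitionBy("trade_date") // lets Hive/Impala prune partitions at read time
      .parquet("hdfs:///warehouse/rollups/daily") // hypothetical output path
  }
}
```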
Primary Skill
Spark
Secondary Skill
Redis
Tertiary Skill
Hadoop
Required Qualifications
Bachelor's or Master's degree in Science or Engineering, or a related field.
Minimum of 8 years of industry experience, with at least 5 years focused on hands-on work in Big Data and real-time data processing technologies.
Strong expertise in Hadoop stack technologies, such as HDFS, Spark, YARN, Hive, Sqoop, Impala, and Hue.
Proficiency in real-time streaming technologies like Kafka (KStreams, KTable) and Spark Streaming.
Strong knowledge of NoSQL databases like Redis, MongoDB, or HBase.
Proficiency in programming languages such as Python, Scala, Java, and Bash/Shell Scripting.
Excellent problem-solving abilities and the capability to deliver effective solutions for business-critical applications.
Strong understanding of distributed systems, data partitioning, and fault-tolerant architectures (a short partitioning sketch follows this list).
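
For a concrete picture of the "data partitioning" item above, here is a minimal Kafka producer sketch: the default partitioner hashes the record key, so every event for a given key lands on the same partition and keeps its order. The topic, broker address, and key names are illustrative assumptions.

```scala
import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

object KeyedProducer {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.put("bootstrap.servers", "broker:9092") // hypothetical brokers
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")

    val producer = new KafkaProducer[String, String](props)
    try {
      // Same key => same partition => per-account ordering is preserved,
      // the property downstream fault-tolerant consumers rely on.
      producer.send(new ProducerRecord[String, String]("events", "account-42", "debit:25.00"))
      producer.send(new ProducerRecord[String, String]("events", "account-42", "credit:10.00"))
    } finally producer.close() // close() flushes pending sends
  }
}
```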
Desired Qualifications
Experience with additional real-time streaming technologies like Flink or Storm.
Familiarity with Cloud Technologies such as Azure, AWS, or GCP.
Working knowledge of machine learning algorithms, statistical analysis, and programming languages (Python or R) to conduct data analysis and develop predictive models.
Proficiency in Data Integration and Data Security within real-time and Big Data ecosystems, including knowledge of Kerberos.
Strong command of visual analytics tools, with a focus on Tableau.