Matlen Silver

Hadoop Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a "Hadoop Data Engineer" in Charlotte, NC, with a 1-year contract at a competitive pay rate. Requires 5+ years of experience in Big Data, expertise in Hadoop/Spark, real-time processing (Kafka), and NoSQL databases (Redis).
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
πŸ—“οΈ - Date
January 9, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
πŸ“ - Location detailed
Charlotte, NC
-
🧠 - Skills detailed
#Agile #Shell Scripting #Redis #Impala #MongoDB #Data Storage #Databases #Hadoop #Bash #Spark (Apache Spark) #Data Processing #Java #HBase #Python #Big Data #YARN (Yet Another Resource Negotiator) #Sqoop (Apache Sqoop) #Scala #HDFS (Hadoop Distributed File System) #Kafka (Apache Kafka) #Scrum #Data Pipeline #Programming #Storage #Scripting #Data Engineering #NoSQL
Role description
Job Title: Data Engineer
Location: Charlotte, NC - Local Candidates Only - 3 days onsite per week required
Duration: 1 Year Contract

We are actively seeking a talented and motivated Data Engineer / Feature Lead to join our dynamic and energetic team. As a key contributor to our agile scrum teams, you will collaborate closely with the Customer Insights team. We are looking for a candidate with strong technical expertise in Hadoop/Spark and related technologies, as well as real-time data processing technologies like Kafka and Spark Streaming, and NoSQL databases such as Redis. The ideal candidate excels at collaborating with both onshore and offshore team members. While functioning as an individual contributor for one or more teams, the Senior Data Engineer will also have the opportunity to lead and take responsibility for end-to-end solution design and delivery, depending on the scale of implementation and the required skill sets.

Responsibilities:
• Design and develop high-performance, scalable real-time data processing solutions using technologies like Kafka (KStreams, KTable), Spark Streaming, and Redis to handle massive data sets from multiple channels.
• Implement and optimize streaming data pipelines to process and analyze near-real-time data, ensuring low-latency and high-throughput performance.
• Leverage expertise in Hadoop stack technologies, including HDFS, Spark, MapReduce, YARN, Hive, Sqoop, Impala, and Hue, to design and optimize data processing workflows.
• Utilize NoSQL databases such as Redis to support real-time data storage and retrieval for mission-critical applications.
• Collaborate with cross-functional teams to identify system bottlenecks, benchmark performance, and propose innovative solutions to enhance system efficiency.
• Take ownership of defining Big Data and real-time data strategies and roadmaps for the enterprise, aligning them with business objectives.
• Stay abreast of emerging technologies and industry trends related to Big Data and real-time data processing, continuously evaluating new tools and frameworks for potential integration.
• Provide guidance and mentorship to junior teammates, fostering a culture of technical excellence.

Required Qualifications:
• Bachelor's or Master's degree in Science, Engineering, or a related field.
• Minimum of 5 years of industry experience, with at least 3 years of hands-on work in Big Data and real-time data processing technologies.
• Strong expertise in Hadoop stack technologies such as HDFS, Spark, YARN, Hive, Sqoop, Impala, and Hue.
• Proficiency in real-time streaming technologies like Kafka (KStreams, KTable) and Spark Streaming.
• Strong knowledge of NoSQL databases such as Redis, MongoDB, or HBase.
• Proficiency in programming languages such as Python, Scala, Java, and Bash/shell scripting.
• Excellent problem-solving abilities and the ability to deliver effective solutions for business-critical applications.
• Strong understanding of distributed systems, data partitioning, and fault-tolerant architectures.
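
For context on the KStreams/KTable pattern this role references, below is a minimal, illustrative Kafka Streams sketch (not part of the posting). The topic names, application id, and broker address are assumptions made purely for the example.

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Produced;

import java.util.Properties;

public class ClickCountApp {
    public static void main(String[] args) {
        // Hypothetical application id, broker address, and topic names for illustration only
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "click-count-app");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();

        // KStream: an unbounded stream of click events keyed by customer id
        KStream<String, String> clicks = builder.stream("customer-clicks");

        // KTable: a continuously updated count of clicks per customer
        KTable<String, Long> clickCounts = clicks.groupByKey().count();

        // Emit the KTable changelog to an output topic for downstream consumers
        clickCounts.toStream()
                   .to("customer-click-counts", Produced.with(Serdes.String(), Serdes.Long()));

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```

In this sketch the KStream represents the raw event feed and the KTable the aggregated, continuously updated state; a Spark Streaming or Redis-backed variant of the same pipeline would follow the same pattern of stream ingestion plus low-latency state lookup.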