Collabera

Hadoop Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Hadoop Data Engineer in the banking industry, with a 12-18 month contract and an hourly pay rate of $60-$65. Key skills include Hadoop, Spark, SQL, and experience with distributed systems. Onsite work is required.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
504
-
🗓️ - Date
May 14, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Charlotte, NC
-
🧠 - Skills detailed
#Impala #JSON (JavaScript Object Notation) #HDFS (Hadoop Distributed File System) #HBase #Spark SQL #Cloud #Data Pipeline #Spark (Apache Spark) #Programming #REST API #PHP #MySQL #REST (Representational State Transfer) #Python #ETL (Extract, Transform, Load) #SQL (Structured Query Language) #Databases #Cloudera #Apache Spark #Big Data #Data Profiling #Scala #Datasets #Kafka (Apache Kafka) #XML (eXtensible Markup Language) #Data Processing #Hadoop #Sqoop (Apache Sqoop) #Data Engineering #Batch
Role description
Job Title: Hadoop Data Engineer
Location: Chicago, Denver, Jacksonville, Charlotte, Addison
Work Arrangement: Onsite from Day 1
Client Industry: Banking
Duration: 12-18 Months (Possibility of Full-Time Conversion)

About the Role:
We are actively looking for an experienced Hadoop Data Engineer to join a high-performing enterprise data engineering team. The ideal candidate will have strong expertise in Big Data technologies, distributed systems, and building scalable batch and near real-time data pipelines.

What We're Looking For:
• Strong hands-on experience with Hadoop and Big Data ecosystems
• Expertise in Apache Spark and Spark Structured Streaming
• Strong SQL skills with Hive, Impala, MySQL, or Spark SQL
• Experience with Kafka, Sqoop, MapReduce, HDFS, HBase, and SOLR
• Experience working with Cloudera/Hortonworks platforms (CDP/HDP)
• Knowledge of Elasticsearch and Kibana is a plus
• Strong programming experience in Scala, Python, or PHP
• Experience working in distributed systems and large-scale data environments

Responsibilities:
• Design, develop, and maintain batch and near real-time data pipelines using Spark Structured Streaming, MapReduce, and Hadoop technologies (a brief illustrative sketch follows this description)
• Ingest data from multiple sources, including Kafka/message queues, REST APIs, relational databases, and file systems
• Transform, validate, and process large datasets using Hive, Impala, Spark SQL, and HDFS
• Work with structured and semi-structured data formats such as JSON, CSV, and XML
• Perform data profiling, validation, and troubleshooting for Spark applications and SQL jobs
• Optimize data processing workflows and resolve performance bottlenecks in distributed environments

Compensation:
Hourly Rate: $60 - $65 per hour
This range reflects base compensation and may vary based on location, market conditions, experience, and candidate qualifications.

Benefits:
The Company offers the following benefits for this position, subject to applicable eligibility requirements: medical insurance, dental insurance, vision insurance, 401(k) retirement plan, life insurance, long-term disability insurance, short-term disability insurance, paid parking/public transportation, paid time off, paid sick and safe time, paid vacation time, paid parental leave, and paid holidays annually, as applicable.

About Us:
At Collabera, we don't just offer jobs; we build careers. As a global leader in talent solutions, we provide opportunities to work with top organizations, cutting-edge technologies, and dynamic teams. Our culture thrives on innovation, collaboration, and a commitment to excellence. With continuous learning, career growth, and a people-first approach, we empower you to achieve your full potential. Join us and be part of a company that values passion, integrity, and making an impact.

Ready to Apply?
Apply now, or reach out at mritunjay.kumar@collabera.com or 973 381 7213 for more information. We look forward to speaking with you!
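Illustrative example (not part of the job requirements): a minimal sketch of the kind of near real-time pipeline named in the Responsibilities, reading JSON events from Kafka with Spark Structured Streaming, applying a simple validation with Spark SQL functions, and landing the result on HDFS. The broker address, topic name, field names, and HDFS paths are placeholders assumed for the example, not details from the posting.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object KafkaToHdfsPipelineSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("kafka-to-hdfs-sketch")
      .getOrCreate()

    import spark.implicits._

    // Read a stream of JSON events from a Kafka topic (broker and topic are placeholders).
    val raw = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "events")
      .load()

    // Parse the JSON payload and apply a simple validation filter with Spark SQL functions.
    val events = raw
      .selectExpr("CAST(value AS STRING) AS json")
      .select(
        get_json_object($"json", "$.id").alias("id"),
        get_json_object($"json", "$.amount").cast("double").alias("amount"),
        get_json_object($"json", "$.ts").cast("timestamp").alias("ts")
      )
      .filter($"id".isNotNull && $"amount" >= 0)

    // Write validated records to HDFS as Parquet in micro-batches, with checkpointing
    // so the stream can recover from failures without reprocessing committed data.
    val query = events.writeStream
      .format("parquet")
      .option("path", "hdfs:///data/events/validated")
      .option("checkpointLocation", "hdfs:///checkpoints/events")
      .outputMode("append")
      .start()

    query.awaitTermination()
  }
}
```

The same DataFrame transformations could be reused in a batch job by swapping `readStream`/`writeStream` for `read`/`write`, which is one common way teams keep batch and near real-time paths consistent.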