Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer on a 12-month contract (extendable up to 24 months) in Chicago, IL; Addison, TX; or Charlotte, NC, paying $58-$65/hr. It requires strong Hadoop/Big Data experience, programming skills in Python/Spark/Scala, and SQL proficiency. Banking industry experience is preferred.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
$520
🗓️ - Date discovered
April 23, 2025
🕒 - Project duration
More than 6 months
🏝️ - Location type
On-site
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
Charlotte, NC
🧠 - Skills detailed
#Data Engineering #Python #Scripting #Agile #Batch #HBase #Big Data #Scala #Impala #Kafka (Apache Kafka) #Programming #Hadoop #Version Control #Spark (Apache Spark) #SQL (Structured Query Language) #Data Pipeline #GIT #Sqoop (Apache Sqoop) #Shell Scripting #Java
Role description

Position Details:

Industry: Banking/IT

Job title: Data Engineer

Location: Chicago, IL / Addison, TX / Charlotte, NC

Duration: 12 months (extension up to 24 months)

Onsite: 100% (Mon - Fri)

Pay Range: $58-$65/hr

Mission:

   • Build new data pipelines, identify existing data gaps, and provide automated solutions that deliver advanced analytical capabilities and enriched data to the applications supporting the operations team.

   • Responsible for obtaining data from the System of Record and establishing a real-time data feed that provides analysis in an automated fashion (a minimal streaming sketch follows below).

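As a rough illustration of the near-real-time side of this mission, here is a minimal PySpark Structured Streaming sketch that consumes change events from the System of Record via Kafka and lands them for downstream analytics. The broker address, topic name, event schema, and paths are illustrative assumptions, not details from this posting.

```python
# Minimal near-real-time feed sketch: System of Record -> Kafka -> analytics sink.
# Broker, topic, schema, and paths below are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("sor-realtime-feed").getOrCreate()

# Assumed shape of a change event published by the System of Record.
event_schema = StructType([
    StructField("record_id", StringType()),
    StructField("account_id", StringType()),
    StructField("event_type", StringType()),
    StructField("event_ts", TimestampType()),
])

raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
       .option("subscribe", "sor.change.events")          # hypothetical topic
       .load())

# Kafka delivers raw bytes; decode and parse the JSON payload into columns.
events = (raw
          .select(from_json(col("value").cast("string"), event_schema).alias("e"))
          .select("e.*"))

# Land parsed events as Parquet for the analytical applications downstream.
query = (events.writeStream
         .format("parquet")
         .option("path", "/data/enriched/sor_events")      # hypothetical path
         .option("checkpointLocation", "/chk/sor_events")  # required for recovery
         .start())

query.awaitTermination()
```

In practice the sink, schema, and delivery guarantees would follow the team's existing pipeline conventions.
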
Day to Day Responsibilities:

   • Working experience with tools such as Hive, Spark, HBase, Sqoop, Impala, Kafka, Flume, Oozie, and MapReduce

   • Hands-on programming experience in languages such as Java, Scala, Python, or shell scripting

   • Experience with the end-to-end design and build of near-real-time and batch data pipelines (see the batch sketch after this list)

   • Strong experience with SQL and data modeling

   • Experience working in an Agile development process and a deep understanding of the phases of the Software Development Life Cycle

   • Experience using source code and version control systems such as SVN and Git

   • Deep understanding of the Hadoop ecosystem and strong conceptual knowledge of Hadoop architecture components

   • Self-starter who works with minimal supervision and can collaborate in a team with diverse skill sets

   • Ability to comprehend customer requests and provide the correct solution

   • Strong analytical mindset for taking on complicated problems

   • Desire to resolve issues and dig into potential problems

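For the batch side of the pipeline work listed above, a minimal PySpark sketch might look like the following: a scheduled job that joins Hive tables to close a data gap and writes the enriched result back. All database, table, and column names are hypothetical.

```python
# Minimal batch-pipeline sketch: enrich operational records from Hive tables.
# Database, table, and column names are hypothetical, not from this posting.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("daily-enrichment-batch")
         .enableHiveSupport()  # allows reading/writing Hive-managed tables
         .getOrCreate())

# Join raw transactions with reference data to fill an assumed data gap.
enriched = spark.sql("""
    SELECT t.txn_id,
           t.account_id,
           t.amount,
           r.branch_name,
           r.region
    FROM   ops_db.transactions t
    JOIN   ops_db.branch_reference r
      ON   t.branch_id = r.branch_id
    WHERE  t.txn_date = current_date()
""")

# Persist the enriched result as a managed table for downstream consumers.
enriched.write.mode("overwrite").saveAsTable("ops_db.enriched_transactions")

spark.stop()
```

A job like this would typically be scheduled through Oozie (listed above) or a similar workflow tool.
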
Must Haves:

   • Good programming background

   • Strong Hadoop/ Big Data experience

   • Expertise in Python, Spark/Scala, and Kafka

   • Strong knowledge of Spark and Kafka (need not be senior-level, but must understand how they work and have a proven ability to deliver)

   • Senior-level experience

Pluses:

   • Prior Banking experience

   • Strong communication skills