

Data Engineer
Position Details:
Industry: Banking/IT
Job title: Data Engineer
Location: Chicago, IL/ Addison, TX/ Charlotte, NC
Duration: 12 months (extension up to 24 months)
Onsite: 100% (Mon - Fri)
Pay Range: $58-$65/hr
Mission:
• Build new data pipelines, identify existing data gaps, and provide automated solutions that deliver advanced analytical capabilities and enriched data to the applications supporting the operations team.
• Obtain data from the System of Record and establish a real-time data feed that delivers analysis in an automated fashion (an illustrative sketch follows below).
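For candidates gauging fit, below is a minimal sketch of the kind of near-real-time feed described above, written with PySpark Structured Streaming. The broker address, topic name, and paths are illustrative assumptions, not the client's actual systems, and running it requires the spark-sql-kafka connector package on the cluster.

    # Minimal sketch: consume records from a Kafka topic and land them for
    # downstream analysis. All names here are hypothetical placeholders.
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col

    spark = SparkSession.builder.appName("sor-realtime-feed").getOrCreate()

    # Subscribe to an assumed "sor-events" topic on an assumed broker.
    events = (
        spark.readStream
        .format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")
        .option("subscribe", "sor-events")
        .load()
    )

    # Kafka delivers key/value as binary; cast them to strings before use.
    parsed = events.select(
        col("key").cast("string"),
        col("value").cast("string"),
        col("timestamp"),
    )

    # Write micro-batches to a checkpointed sink for downstream consumers.
    query = (
        parsed.writeStream
        .format("parquet")
        .option("path", "/data/feeds/sor")          # illustrative path
        .option("checkpointLocation", "/chk/sor")   # required for fault tolerance
        .start()
    )
    query.awaitTermination()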
Day to Day Responsibilities:
• Working experience with tools such as Hive, Spark, HBase, Sqoop, Impala, Kafka, Flume, Oozie, and MapReduce
• Hands-on programming experience in languages such as Java, Scala, Python, or shell scripting
• Experience with the end-to-end design and build of near-real-time and batch data pipelines (a batch-side sketch follows this list)
• Strong experience with SQL and data modeling
• Experience working in an Agile development process and a deep understanding of the phases of the Software Development Life Cycle
• Experience with source code and version control systems such as SVN and Git
• Deep understanding of the Hadoop ecosystem and strong conceptual knowledge in Hadoop architecture components
• Self-starter who works with minimal supervision and collaborates well in a team with diverse skill sets
• Ability to understand customer requests and deliver the right solution
• Strong analytical mindset for tackling complicated problems
• Willingness to dig into potential issues and drive them to resolution
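On the batch side of the pipeline work listed above, a comparable minimal PySpark sketch might read a Hive table, apply a light transformation, and write a partitioned result. The database, table, column, and partition names are assumptions for illustration only.

    # Minimal batch sketch: Hive table in, partitioned Parquet out.
    # All table/column names are hypothetical placeholders.
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import to_date

    spark = (
        SparkSession.builder
        .appName("daily-batch-enrichment")
        .enableHiveSupport()   # needed to read managed Hive tables
        .getOrCreate()
    )

    # Pull one day's records from an assumed source-of-record table;
    # "ds" is an assumed partition column.
    txns = spark.table("sor_db.transactions").where("ds = '2024-01-01'")

    # Derive a date column and keep only the fields downstream apps need.
    enriched = txns.withColumn("txn_date", to_date("event_ts")).select(
        "account_id", "txn_date", "amount"
    )

    # Land the result partitioned by date for efficient downstream reads.
    (
        enriched.write
        .mode("overwrite")
        .partitionBy("txn_date")
        .parquet("/data/enriched/transactions")   # illustrative path
    )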
Must Haves:
• Solid programming background
• Strong Hadoop/Big Data experience
• Python, Spark/Scala, and Kafka expertise
• Strong working knowledge of Spark and Kafka (need not be senior-level in these specifically, but must understand how they work and have a proven ability to deliver)
• Senior-level engineering experience overall
Pluses:
• Prior Banking experience
• Strong communication skills