Hadoop/Spark Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Hadoop/Spark Data Engineer with a contract length of "unknown," offering a pay rate of "unknown." Key skills include Spark3, Python, Shell Scripting, and Hadoop Cloudera. Banking industry experience and 3+ years of development experience are required.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
-
🗓️ - Date discovered
September 4, 2025
🕒 - Project duration
Unknown
-
🏝️ - Location type
Unknown
-
📄 - Contract type
Unknown
-
🔒 - Security clearance
Unknown
-
📍 - Location detailed
Pittsburgh, PA
-
🧠 - Skills detailed
#Jira #Python #Alation #Bash #Jenkins #GIT #Scala #Spark (Apache Spark) #Agile #ERWin #Shell Scripting #Data Engineering #Data Processing #Scripting #Cloud #Hadoop #Data Mining #Data Modeling #Data Governance #HDFS (Hadoop Distributed File System) #Cloudera
Role description
Our client has an immediate need for a Hadoop/Spark Data Engineer who will design and implement new applications, maintain existing applications, and support RTB (Run the Bank) functions in a dynamic, fast-paced environment.

Must-Have Technical Skills & Experience:
• Spark3
• Python
• Shell Scripting
• Hadoop Cloudera (Hadoop administration, Hadoop tables, HDFS file processing)
• Data and system architecture experience
• Data modeling using the Erwin tool
• Alation or similar data mining/reporting tools

Flex Skills/Nice to Have:
• Agile methodology
• Jira
• Git Bash
• Jenkins
• uDeploy
• CA7
• Banking industry experience preferred
• 3+ years of development experience

Responsibilities:
• Design, develop, and implement new data applications and solutions
• Maintain and enhance existing applications to ensure performance, scalability, and reliability
• Support RTB functions, ensuring data availability, quality, and timely processing
• Work with Hadoop Cloudera environments to manage Hadoop administration, HDFS file processing, and Hadoop tables
• Develop, optimize, and maintain Spark3, Python, and shell scripts to handle large-scale data processing
• Apply data and system architecture expertise to design scalable, efficient, and secure solutions
• Perform data modeling using the Erwin tool to design and maintain enterprise data models
• Use Alation or similar data mining/reporting tools to support data governance and analytics initiatives
• Collaborate with cross-functional teams to align technical solutions with business needs

For a complete listing of all ConsultUSA jobs and the benefits of working for us, please visit www.consultusa.com