

E-IT
Hadoop Data Engineer - 8–12 Years
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Hadoop Data Engineer with 8–12 years of experience, based in Scottsdale, AZ. It is a contract position requiring expertise in Java, Spring Boot, Spark, Scala, Hive, SQL, and Kafka.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date
April 24, 2026
🕒 - Duration
Unknown
🏝️ - Location
On-site
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
Scottsdale, AZ
🧠 - Skills detailed
#Data Integration #Big Data #Data Pipeline #Data Engineering #GIT #Scala #SQL (Structured Query Language) #Web Services #Apache Spark #Maven #Hadoop #Java #Batch #Version Control #DevOps #Spring Boot #Spark (Apache Spark) #Data Processing #GitLab #Kafka (Apache Kafka) #Data Architecture
Role description
Role: Hadoop Data Engineer
Location: Scottsdale, AZ (100% on-site)
Hire Type: Contract
Key Responsibilities
· Design, develop, and maintain scalable data processing applications using Java (8 or above) and Spring Boot.
· Build and optimize big data pipelines using Spark and Scala for large-scale data processing (see the batch sketch after this list).
· Develop and consume RESTful web services for data integration and platform interoperability.
· Implement and manage batch and streaming data pipelines using Spark Streaming and Kafka.
· Write optimized Hive queries and SQL, focusing on performance and scalability.
· Work on distributed data platforms leveraging Hadoop ecosystem components.
· Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.
· Ensure code quality through reviews, version control, and CI/CD best practices.
· Troubleshoot and resolve performance, scalability, and data reliability issues.
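To make the batch side of these responsibilities concrete, below is a minimal Scala sketch of the kind of Spark job the role describes: read a Hive table, aggregate it, and write the result back to the warehouse. This is an illustration, not part of the posting; all database, table, and column names (raw.events, curated.daily_account_totals, event_date, account_id, amount) are hypothetical placeholders.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object DailyAggregationJob {
  def main(args: Array[String]): Unit = {
    // enableHiveSupport lets spark.table() and spark.sql() resolve
    // Hive warehouse tables (assumes a Hive-enabled Spark deployment).
    val spark = SparkSession.builder()
      .appName("daily-aggregation")
      .enableHiveSupport()
      .getOrCreate()

    // Hypothetical source table and columns; real names would come
    // from the platform's data model.
    val events = spark.table("raw.events")
      .where(col("event_date") === current_date())

    // Aggregate per account for the current day.
    val daily = events
      .groupBy(col("account_id"))
      .agg(
        count(lit(1)).as("event_count"),
        sum(col("amount")).as("total_amount")
      )

    // Coalesce before writing so the output is a handful of files
    // rather than one file per shuffle partition.
    daily.coalesce(8)
      .write
      .mode("overwrite")
      .saveAsTable("curated.daily_account_totals")

    spark.stop()
  }
}
```

Coalescing before the write is a common guard against the small-files problem on HDFS-backed warehouses, which is one of the performance concerns the troubleshooting bullet above points at.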
Required Skills & Qualifications
· 8–12 years of hands-on experience in Java (version 8 or above).
· Strong expertise in Spring Boot framework.
· Excellent proficiency in Apache Spark and Scala.
· Strong experience with Hive, SQL optimization, Hadoop, and Kafka.
· Solid understanding of RESTful web services.
· Experience with version control systems such as Git / GitLab.
· Hands-on experience with build tools like Maven and/or Gradle.
· Strong understanding of distributed systems and big data architecture.
· Experience with streaming frameworks such as Spark Streaming and Kafka Streams (see the streaming sketch after this list).
· Experience working in large-scale enterprise data platforms.
· Exposure to performance tuning and capacity planning for big data systems.
· Knowledge of DevOps or CI/CD pipelines is a plus.
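For the streaming requirement above, here is a minimal sketch using Spark Structured Streaming (the current successor to DStream-based Spark Streaming) to consume a Kafka topic and land it as Parquet. The broker address, topic name, and paths are hypothetical, and the job assumes the spark-sql-kafka connector is on the classpath; a real deployment would take all of these from configuration.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.streaming.Trigger

object KafkaIngestJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("kafka-ingest")
      .getOrCreate()

    // Requires the spark-sql-kafka-0-10 connector on the classpath.
    // Broker list and topic are placeholders.
    val raw = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker1:9092")
      .option("subscribe", "events")
      .option("startingOffsets", "latest")
      .load()

    // Kafka rows expose key/value as binary; cast to strings before
    // any downstream parsing.
    val parsed = raw.selectExpr(
      "CAST(key AS STRING) AS key",
      "CAST(value AS STRING) AS value",
      "timestamp"
    )

    // The checkpoint location records committed offsets so the job can
    // restart without reprocessing or dropping data at the sink.
    val query = parsed.writeStream
      .format("parquet")
      .option("path", "/data/landing/events")
      .option("checkpointLocation", "/checkpoints/kafka-ingest")
      .trigger(Trigger.ProcessingTime("1 minute"))
      .start()

    query.awaitTermination()
  }
}
```

The checkpoint location is what lets the job resume from its last committed offsets after a restart, which is the usual basis for reliability guarantees in Kafka-to-warehouse pipelines like the ones this role owns.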
