

E-IT
Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer in Scottsdale, AZ, on a contract basis. Requires 10-12 years of experience, strong skills in Scala, Spark, Hive SQL, Hadoop, and Kafka, with expertise in big data architecture and real-time data streaming.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
January 6, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Scottsdale, AZ
-
🧠 - Skills detailed
#Cloud #Kafka (Apache Kafka) #Data Governance #Spark (Apache Spark) #Big Data #Data Architecture #Batch #Cloudera #Scala #ETL (Extract, Transform, Load) #Java #Data Quality #Data Engineering #Hadoop #SQL (Structured Query Language)
Role description
Role : Data Engineer
Location : Scottsdale AZ (100% Onsite)
Contract
Must have:
• 10-12 years of experience
• Strong experience with Scala, Spark, Hive SQL, Hadoop, and Kafka
• Proficiency in Hive and SQL optimization
• Understanding of distributed systems and big data architecture
• Knowledge of streaming frameworks (Spark Streaming, Kafka Streams)
• Good to have: Aerospike experience
Skills required:
1. Experience: 6-9 years
2. Primary skills: Cloudera (Hadoop), Spark + Scala or Spark + Java, and SQL
3. Good understanding of Hive and Aerospike
4. Strong analytical skills
5. Experience delivering large-scale ETL and data warehouse (DW) projects and pipelines
6. Real-time data streaming experience, batch orchestration, data quality and reconciliation, and an understanding of concepts such as Data Governance are a must
7. Strong communication skills and the ability to work independently, troubleshoot problems, and develop solutions
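The core stack listed above (Spark + Scala consuming from Kafka and landing data in a Hive-compatible store) can be sketched as follows. This is a minimal illustration of the pattern, not part of the posting; the broker address, topic name, and paths are placeholders.

```scala
// Sketch: Kafka -> Spark Structured Streaming -> Parquet (Hive-readable),
// assuming a Spark runtime with the spark-sql-kafka connector available.
import org.apache.spark.sql.SparkSession

object EventStreamSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("kafka-stream-sketch")
      .getOrCreate()

    // Read a Kafka topic as a streaming DataFrame (placeholder broker/topic).
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "events")
      .load()
      .selectExpr("CAST(value AS STRING) AS payload")

    // Write micro-batches as Parquet files a Hive external table can read.
    // The checkpoint location enables exactly-once recovery after failure.
    events.writeStream
      .format("parquet")
      .option("path", "/warehouse/events")
      .option("checkpointLocation", "/checkpoints/events")
      .start()
      .awaitTermination()
  }
}
```

Candidates would typically be expected to extend this kind of pipeline with schema enforcement, reconciliation checks, and batch orchestration, per the requirements above.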
