

Clevanoo LLC
Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer in Charlotte, NC (hybrid) on a 12-month contract, offering a pay rate of "X". It requires 5+ years of software engineering experience, proficiency in Java/Spark or Scala/Spark, and experience with the Hadoop ecosystem and cloud deployment.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
February 26, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
Hybrid
-
📄 - Contract
1099 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
North Carolina, United States
-
🧠 - Skills detailed
#Data Engineering #Cloud #Maven #S3 (Amazon Simple Storage Service) #Java #Scripting #GIT #Spark (Apache Spark) #GCP (Google Cloud Platform) #Bash #SQL (Structured Query Language) #Deployment #Spark SQL #Shell Scripting #Kafka (Apache Kafka) #Version Control #Scala #AWS (Amazon Web Services) #Hadoop #Big Data #Normalization #Data Modeling #Unix
Role description
Title: Senior Data Engineer
Location: Charlotte, NC (hybrid)
Duration: 12-month contract
Required Qualifications:
• 5+ years of Software Engineering experience
• Experience with Java/Spark or Scala/Spark
• Experience with version control tools such as Git, CI/CD processes, and build tools such as Gradle and Maven
• Hands-on experience with the Hadoop ecosystem and big data technologies and stores such as Hive, Kafka, S3, and Iceberg (good to have)
• Strong knowledge of database concepts and UNIX bash scripting
• Experience with Spark and SQL, including Spark performance tuning; data modeling and normalization good to have
• Good to have: experience developing cloud-native applications and deploying to the cloud (AWS or GCP)
• Experience leading other engineers and designing solutions
• Experience abstracting and decomposing software into services
Java or Scala, Spark, SQL, Shell scripting
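The posting pairs Spark work with UNIX bash scripting. As a hedged illustration only (the file name and column layout below are invented, not from the posting), a pre-flight validation like this is a typical shell task before handing a file to a Spark job:

```shell
#!/usr/bin/env bash
# Sketch: validate a delimited file before a Spark ingest step.
# "events.csv" and its columns are hypothetical examples.
set -euo pipefail

input="events.csv"
printf 'id,event,ts\n1,click,2026-02-26\n' > "$input"   # stand-in data

header=$(head -n 1 "$input")
rows=$(( $(wc -l < "$input") - 1 ))   # data rows, excluding the header

# Fail fast if the header does not match the expected schema.
[ "$header" = "id,event,ts" ] || { echo "unexpected header" >&2; exit 1; }
echo "rows=$rows"
```

The `set -euo pipefail` line makes the script abort on the first error, which is the usual convention for scripts run inside CI/CD pipelines like those the role describes.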






