

Redolent, Inc.
Sr. Data Engineer – Java, Spark | Ex-WALMART Candidates Only (REMOTE)
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Sr. Data Engineer – Java, Spark, requiring prior Walmart experience. Contract length is unspecified, with a hybrid work arrangement. Key skills include Java, Spark, GCP, and Unix/Linux. Experience in ETL development and compliance projects is essential.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
560
-
🗓️ - Date
October 8, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Sunnyvale, CA
-
🧠 - Skills detailed
#Cloud #Spark (Apache Spark) #Impala #Data Engineering #Big Data #ETL (Extract, Transform, Load) #Java #GCP (Google Cloud Platform) #Compliance #Linux #HBase #Scala #Databases #Hadoop #Unix #Data Lake
Role description
Key Skills (Must-Have):
• Strong experience with Java, Scala, Spark, Hive, and Hadoop
• Hands-on with Google Cloud Platform (GCP)
• Proficient in Unix/Linux and relational databases
Role Overview:
We are seeking a seasoned Data Engineer to support the development and optimization of big data solutions in a hybrid cloud environment. This role focuses on building scalable ETL pipelines, ensuring compliance, and enhancing data lake operations.
Responsibilities:
• Design and develop scalable ETL solutions using Spark and Java
• Optimize and maintain jobs in the Google Cloud environment
• Troubleshoot and debug performance issues in large data systems
• Support state-level compliance initiatives and customer data lake projects
Preferred Experience:
• Prior work with Walmart data platforms
• Experience in ETL development in cloud-based big data environments
• Background in compliance-related data projects
• Familiarity with tools like Drill, Impala, and HBase is a plus
Team Structure:
• Distributed team across the US and IDC
• Collaboration with cross-functional data and compliance teams