

JMD Technologies Inc.
Java with Big Data
Featured Role | Apply directly with Data Freelance Hub
This role is a contract position for a "Java with Big Data" specialist in Santa Clara, CA, requiring 10+ years of experience. Key skills include Java, Python, Docker, Kubernetes, and Big Data components. On-site work is mandatory.
Country
United States
Currency
$ USD
-
Day rate
Unknown
-
Date
January 10, 2026
Duration
Unknown
-
Location
On-site
-
Contract
Unknown
-
Security
Unknown
-
Location detailed
Santa Clara, CA
-
Skills detailed
#Scripting #Programming #HDFS (Hadoop Distributed File System) #Big Data #Docker #Impala #Microservices #Redis #Spark (Apache Spark) #Shell Scripting #Airflow #Java #API (Application Programming Interface) #Kubernetes #Python
Role description
Position: Java with Big Data
Location: Santa Clara, CA
Type: Contract
Experience: 10+ years
Must have:
1. Excellent hands-on experience with Java, Python, and shell scripting.
2. Strong scripting/programming skills.
3. Strong background in Docker, containerization, and microservices architecture.
4. Expertise with Kubernetes internals (schedulers, controllers, API server, etc.).
5. Proficiency with Helm, Kustomize, and Kubernetes operators.
Good to have:
1. Experience supporting distributed TSDBs, Redis, and Airflow.
2. Experience with VictoriaMetrics is a plus.
Hands-on experience working with Big Data components (HDFS, Spark, Impala, Hive, or similar) is a plus.






