E-Solutions

Big Data Engineer - Java (Local CA Only)

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Big Data Engineer with a Java background, located in Mountain View/San Diego, CA, offering a contract length of "unknown" and a pay rate of "unknown." Key skills include Apache Flink, Java, Apache Kafka, and AWS cloud services.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
πŸ—“οΈ - Date
January 29, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
πŸ“ - Location detailed
Mountain View, CA
-
🧠 - Skills detailed
#Kafka (Apache Kafka) #Programming #Observability #Scripting #Cloud #Unit Testing #Data Engineering #System Testing #Athena #Data Pipeline #Batch #Big Data #Shell Scripting #Apache Kafka #Databases #Java #Scala #Spark (Apache Spark) #AWS (Amazon Web Services)
Role description
• Role: Big Data Engineer with Java Background
• Location: Mountain View/San Diego, CA (100% onsite)

Must have:
• Expertise in Apache Flink or equivalent stream-processing experience
• Strong programming skills in Java
• Knowledge of stream processing and batch processing
• Experience working with Apache Kafka
• Proven experience delivering production Apache Flink projects in Java or Scala
• Knowledge of relational databases
• Knowledge of MPP query engines such as AWS Athena
• Knowledge of Hive
• Experience with AWS cloud services

What You'll Do:
• Write new data pipelines
• Debug and optimize existing data pipelines
• Analyze pipelines that consume high resources or have long execution times, and optimize as needed
• Implement automation for pipeline management and set up metrics for observability
• Gather requirements, produce high-level designs, implement (code), and deliver efficient, scalable data warehouse solutions in a high-data-growth environment
• Oversee team activities related to coding, unit testing, and system testing; resolve defects originating during system testing and deploy fixes as needed

Good to Have:
• Familiarity with big data technologies like Spark and Hive
• Familiarity with CI/CD and basic DevOps
• Familiarity with shell scripting