

Big Data Engineer (Scala) - Local to Mountain View, CA
Featured Role | Apply direct with Data Freelance Hub
This role is for a Big Data Engineer (Scala) in Mountain View, CA; the contract length and pay rate are unknown. It requires 5+ years of data engineering experience, strong Java and Scala skills, and proficiency with AWS services.
Country
United States
Currency
$ USD
Day rate
-
Date discovered
June 14, 2025
Project duration
Unknown
Location type
On-site
Contract type
Unknown
Security clearance
Unknown
Location detailed
Mountain View, CA
Skills detailed
#Databases #Spark (Apache Spark) #SQL (Structured Query Language) #Data Pipeline #Lambda (AWS Lambda) #GIT #Compliance #Grafana #Programming #Kubernetes #Monitoring #NoSQL #Data Engineering #Security #Apache Spark #Cloud #S3 (Amazon Simple Storage Service) #Jenkins #Docker #Data Quality #Datasets #Scala #Prometheus #AWS (Amazon Web Services) #Data Integration #Batch #Data Lake #Java #Redshift #Data Science #Logging
Role description
Role: Big Data Engineer (Scala)
Location: Mountain View, CA (100% onsite from day 1)
Responsibilities:
• Design and develop scalable data pipelines using Apache Spark, Flink, and Scala.
• Build and maintain data integration solutions across various data sources using AWS services.
• Develop efficient, reusable, and reliable code in Java and Scala.
• Implement real-time stream processing and batch processing architectures.
• Collaborate with data scientists, architects, and other engineers to develop end-to-end data solutions.
• Monitor and optimize the performance of data workflows and job executions.
• Ensure data quality, security, and compliance throughout the data lifecycle.
• Troubleshoot and resolve data-related technical issues.
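To make the data-quality responsibility above concrete, here is a minimal sketch in plain Scala (no Spark dependency) of a validation step that routes bad rows away from the main pipeline; the Event type and its rules are illustrative assumptions, not part of this posting:

```scala
// Hypothetical record type for a batch pipeline; fields are illustrative.
final case class Event(id: String, userId: String, amountCents: Long)

object DataQuality {
  // Right(event) when the record passes basic checks, Left(reason)
  // otherwise, so rejects can be routed to a dead-letter sink.
  def validate(e: Event): Either[String, Event] =
    if (e.id.isEmpty) Left(s"missing id for user ${e.userId}")
    else if (e.amountCents < 0) Left(s"negative amount in ${e.id}")
    else Right(e)

  // Split a batch into (reject reasons, accepted events) — the same
  // shape a Spark job could apply per partition via mapPartitions.
  def partitionBatch(batch: Seq[Event]): (Seq[String], Seq[Event]) = {
    val (lefts, rights) = batch.map(validate).partition(_.isLeft)
    (lefts.collect { case Left(r) => r }, rights.collect { case Right(e) => e })
  }
}
```

Keeping `validate` a pure function makes the check unit-testable outside any cluster, which is what "reliable code" tends to mean in practice for pipeline work.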
Required Skills & Qualifications:
• 5+ years of professional experience in data engineering or backend software development.
• Strong programming skills in Java and Scala.
• Hands-on experience with Apache Spark and Apache Flink for batch and stream processing.
• Solid experience working with AWS services such as S3, EMR, Lambda, Kinesis, Glue, and Redshift.
• Proficiency in designing data models and working with large datasets.
• Familiarity with CI/CD practices and tools like Git, Jenkins, or similar.
• Strong understanding of distributed systems and cloud-native design patterns.
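The "reusable code" expectation above often shows up as small, composable transformation steps rather than monolithic jobs. A minimal sketch, with illustrative step names that are assumptions of this example:

```scala
// Each step is a pure function, so steps can be tested in isolation
// and chained into a pipeline.
object Transforms {
  type Step[A] = A => A

  val trim: Step[String]       = _.trim
  val collapseWs: Step[String] = _.replaceAll("\\s+", " ")
  val lower: Step[String]      = _.toLowerCase

  // Function.chain folds the steps left-to-right with andThen,
  // producing one reusable normalization function.
  val normalize: Step[String] = Function.chain(Seq(trim, collapseWs, lower))
}
```

For example, `Transforms.normalize("  Hello   World ")` yields `"hello world"`; the same composition pattern carries over directly to Spark column expressions or Flink map operators.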
Preferred Qualifications:
• Experience with containerization tools such as Docker and orchestration using Kubernetes.
• Familiarity with data lake architecture and modern data stack concepts.
• Knowledge of SQL and NoSQL databases.
• Experience with monitoring and logging tools like Prometheus, Grafana, or CloudWatch.