

Databricks Developer - Java Spark
Featured Role | Apply direct with Data Freelance Hub
This role is for a Databricks Developer - Java Spark, offering a contract length of "unknown" and a pay rate of "unknown." Key skills include Java 8+, Apache Spark, ETL, and big data experience. A Bachelor's degree and 8+ years of relevant experience are required.
Country
United States
Currency
$ USD
Day rate
-
Date discovered
July 15, 2025
Project duration
Unknown
Location type
Unknown
Contract type
Unknown
Security clearance
Unknown
Location detailed
United States
Skills detailed
#Prometheus #Monitoring #Apache Spark #Scrum #Security #YARN (Yet Another Resource Negotiator) #Maven #Scala #ETL (Extract, Transform, Load) #Apache Iceberg #SQL (Structured Query Language) #HDFS (Hadoop Distributed File System) #API (Application Programming Interface) #Data Modeling #S3 (Amazon Simple Storage Service) #Agile #JUnit #Programming #Azure #Deployment #Data Security #HBase #Spark (Apache Spark) #Unit Testing #GIT #Delta Lake #Version Control #TestNG #Batch #Documentation #Computer Science #Grafana #Data Cleansing #Compliance #DevOps #Spark SQL #Jenkins #Data Processing #Regression #Big Data #Java #Python #Kafka (Apache Kafka) #Datadog #Data Pipeline #Kubernetes #Databricks #GitHub #Lambda (AWS Lambda) #Data Lake
Role description
Task Description:
The Databricks Developer will be responsible for designing, developing, and maintaining scalable data processing solutions on the Databricks platform, with a focus on integrating and transforming data.
Required skills/Level of Experience:
We are seeking a Databricks Developer with deep expertise in Java and Apache Spark. The ideal candidate will be responsible for designing, developing, and optimizing big data pipelines and analytics solutions on the Databricks platform. This role requires a deep understanding of distributed data processing, performance tuning, and scalable architecture.
Key Responsibilities:
• Design, develop, and maintain scalable data pipelines using Apache Spark on Databricks
• Implement data processing logic in Java 8+, leveraging functional programming and OOP best practices
• Optimize Spark jobs for performance, reliability, and cost-efficiency
• Collaborate with cross-functional teams to gather requirements and deliver data solutions
• Ensure compliance with data security, privacy, and governance standards
• Troubleshoot and debug production issues in distributed data environments
Required Skills & Qualifications:
• Bachelor's degree in Computer Science, Information Systems, or a related field.
• 8+ years of professional experience demonstrating the required technical skills and responsibilities listed:
Programming Language Proficiency
• Strong expertise in Java 8 or higher
• Experience with functional programming (Streams API, Lambdas)
• Familiarity with object-oriented design patterns and best practices
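As a rough illustration of the Java 8+ functional style this section asks for, the sketch below aggregates event counts with the Streams API and lambdas; the class and method names are purely illustrative, not part of the posting.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

public class StreamAggregation {
    // Count occurrences of each event name using Streams and Collectors,
    // the kind of declarative aggregation that maps naturally onto Spark.
    public static Map<String, Long> countByKey(List<String> events) {
        return events.stream()
                .collect(Collectors.groupingBy(Function.identity(), Collectors.counting()));
    }

    public static void main(String[] args) {
        List<String> events = Arrays.asList("click", "view", "click", "click");
        System.out.println(countByKey(events));
    }
}
```

The same `groupingBy`/`counting` shape carries over almost directly to Spark's `groupBy().count()` on a Dataset.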
Apache Spark
• Proficient in Spark Core, Spark SQL, and DataFrame/Dataset APIs
• Understanding of RDDs and when to use them
• Experience with Spark Streaming or Structured Streaming
• Skilled in performance tuning and Spark job optimization
• Ability to use Spark UI for troubleshooting stages and tasks
Big Data Ecosystem
• Familiarity with HDFS, Hive, or HBase
• Experience integrating with Kafka, S3, or Azure Data Lake
• Comfort with Parquet, Avro, or ORC file formats
Data Processing and ETL
• Strong understanding of batch and real-time data processing paradigms
• Experience building ETL pipelines with Spark
• Proficient in data cleansing, transformation, and enrichment
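To give a concrete flavor of the cleansing/transformation work described above, here is a minimal sketch of a pure cleansing step (hypothetical names); keeping such logic free of Spark dependencies means it can be reused inside a Spark `map` or `flatMap` and tested without a cluster.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Objects;
import java.util.stream.Collectors;

public class CleanseStep {
    // Drop nulls and blanks, trim whitespace, and normalize case --
    // a typical pre-enrichment cleansing pass over raw string records.
    public static List<String> cleanse(List<String> raw) {
        return raw.stream()
                .filter(Objects::nonNull)
                .map(String::trim)
                .filter(s -> !s.isEmpty())
                .map(String::toLowerCase)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(cleanse(Arrays.asList(" Foo ", null, "", "BAR")));
    }
}
```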
DevOps / Deployment
• Experience with YARN, Kubernetes, or EMR for Spark deployment
• Familiarity with CI/CD tools like Jenkins or GitHub Actions
• Monitoring experience with Grafana, Prometheus, Datadog, or Spark UI logs
Version Control & Build Tools
• Proficient in Git
• Experience with Maven or Gradle
Testing
• Unit testing with JUnit or TestNG
• Experience with Mockito or similar mocking frameworks
• Data validation and regression testing for Spark jobs
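One common pattern behind the testing bullets above: keep per-record Spark logic in pure functions so it can be validated without a cluster. The sketch below uses plain assertions for self-containment; a real suite would express the same checks as JUnit or TestNG test methods, and the `enrich` transformation is hypothetical.

```java
public class EnrichStepTest {
    // Transformation under test: a hypothetical enrichment step that
    // normalizes a record and tags it with a region code.
    static String enrich(String record) {
        return record.trim().toUpperCase() + "|US";
    }

    // JUnit-style check, sketched with a plain helper.
    static void assertEquals(Object expected, Object actual) {
        if (!expected.equals(actual)) {
            throw new AssertionError("expected " + expected + " but got " + actual);
        }
    }

    public static void main(String[] args) {
        assertEquals("ABC|US", enrich(" abc "));
        assertEquals("X|US", enrich("x"));
        System.out.println("all checks passed");
    }
}
```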
Soft Skills / Engineering Practices
• Experience working in Agile/Scrum environments
• Strong documentation skills (Markdown, Confluence, etc.)
• Ability to debug and troubleshoot production issues effectively
Preferred Qualifications:
• Experience with Scala or Python in Spark environments
• Familiarity with Databricks or Google DataProc
• Knowledge of Delta Lake or Apache Iceberg
• Experience with data modeling and performance design for big data systems