Pacer Group

Databricks Developer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Databricks Developer in Pleasanton, CA, on a 24-month contract. Key skills include extensive Databricks and Apache Spark experience, data lakehouse architecture, SQL, and Unix scripting. Relevant certifications in Databricks or cloud platforms are preferred.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
520
-
🗓️ - Date
May 12, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
United States
-
🧠 - Skills detailed
#Informatica #Storage #Monitoring #Data Lake #Datasets #Documentation #Data Governance #Data Processing #Business Analysis #Metadata #Data Pipeline #Spatial Data #Apache Spark #Data Science #Security #Scripting #Cloud #GCP (Google Cloud Platform) #Data Lineage #ACID (Atomicity, Consistency, Isolation, Durability) #Big Data #Spark (Apache Spark) #Data Integrity #Unix #Data Engineering #Azure #Databricks #Geospatial Analysis #Delta Lake #AWS (Amazon Web Services) #BI (Business Intelligence) #Shell Scripting #Data Quality #ETL (Extract, Transform, Load) #Scala #Data Lakehouse #Data Access #Compliance #SQL (Structured Query Language)
Role description
Note to Applicants: We are actively reviewing all submissions! Even if you see a high number of applicants, we encourage you to apply if your skills align with the role. We value every application and are committed to finding the right fit.

Job Overview: We are seeking a highly proficient Databricks Developer to design, develop, and manage scalable data pipelines and advanced analytics solutions. This role is instrumental in transforming complex, raw data into actionable business intelligence through the Databricks platform. The successful candidate will leverage their expertise in big data processing and cloud environments to optimize workflows, ensure data integrity, and drive high-performance computational efficiency across the organization's data ecosystem.

Job Details
• Job Title: Senior Databricks Developer
• Job Location: Pleasanton, CA
• Job Duration: Approximately 24 months (June 1, 2026 – May 10, 2028)
• Job Type: Contract
• Industry Focus: Healthcare

Core Competencies
To be considered for this position, applicants must demonstrate mastery of the following:
• Databricks Ecosystem: Extensive experience designing and deploying notebooks and jobs within a Databricks environment.
• Apache Spark: Deep technical knowledge of Spark for large-scale data processing and ETL/ELT optimization.
• Data Lakehouse Architecture: Practical experience with Delta Lake for ACID transactions and scalable metadata handling.
• Technical Stack: Proficiency in SQL, Unix shell scripting, and Informatica.
• Identity & Security: Functional understanding of ADFS (Active Directory Federation Services) for secure data access.

Key Responsibilities
• Pipeline Architecture: Design, develop, and maintain automated end-to-end data pipelines using Databricks and Apache Spark.
• Process Optimization: Refactor existing ETL/ELT workflows to improve data throughput and reduce cloud consumption costs.
• Integration Management: Seamlessly integrate Databricks with cloud storage solutions and enterprise data lakes to ensure a unified data source.
• Data Governance: Implement and enforce rigorous data quality standards, security protocols, and governance best practices in alignment with organizational policy.
• Stakeholder Collaboration: Partner with cross-functional teams, including data scientists and business analysts, to translate complex requirements into technical specifications.
• Production Support: Monitor and troubleshoot production workflows, performing root cause analysis on job failures to ensure maximum uptime.

Technical Depth & Standards
The candidate must adhere to the following technical and regulatory frameworks:
• Security Protocols: Ensure all data movement and storage comply with internal security benchmarks and ADFS authentication standards.
• Geospatial Analysis: Apply advanced analytical methods to geospatial datasets to support location-based business intelligence.
• Performance Tuning: Use the Spark UI and Databricks monitoring tools to identify and resolve bottlenecks in partitioned datasets.
• Compliance: Maintain documentation for data lineage and transformation logic to ensure audit readiness and operational transparency.

Qualifications
• Experience: Minimum of 6 to 10+ years of professional experience in data engineering or big data analytics, including Apache Spark and Databricks.
• Education: Relevant certifications in Databricks or cloud platforms (Azure/AWS/GCP) are considered a strong asset.