

Daman
Databricks Champion
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is a long-term contract for a "Databricks Champion" to lead Databricks adoption and optimization remotely. Requires 7+ years in Data Engineering, 3+ years with Databricks, strong Spark expertise, and proficiency in cloud platforms like Azure, AWS, or GCP.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
March 30, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Remote
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
United States
-
🧠 - Skills detailed
#AWS (Amazon Web Services) #PySpark #SQL (Structured Query Language) #GCP (Google Cloud Platform) #Apache Spark #Databricks #Scala #Compliance #Delta Lake #Data Pipeline #Cloud #ADF (Azure Data Factory) #ETL (Extract, Transform, Load) #Security #Data Security #Azure #Data Engineering #Migration #Data Modeling #Data Architecture #Airflow
Role description
Job Title: Databricks Champion
Location: Remote
Engagement Type: Long-term Contract
Role Overview
We are seeking a highly skilled and experienced Databricks Champion to lead the adoption, optimization, and governance of Databricks-based data platforms across the organization. This role requires a blend of deep technical expertise, strategic thinking, and evangelism to drive best practices and maximize platform value.
The ideal candidate will act as a subject matter expert (SME) for Databricks, enabling engineering teams, influencing architecture decisions, and ensuring scalable, efficient, and secure data solutions.
Key Responsibilities
• Serve as the Databricks SME, guiding teams on architecture, design patterns, and implementation strategies
• Lead the design and optimization of data pipelines using Databricks (Delta Lake, Spark, etc.)
• Define and enforce best practices, standards, and governance for Databricks usage
• Collaborate with data engineers, architects, and business stakeholders to deliver scalable solutions
• Drive performance tuning and cost optimization of Databricks workloads
• Enable teams through training, mentorship, and knowledge sharing
• Evaluate and implement new Databricks features and capabilities
• Support data platform modernization initiatives and cloud migration efforts
• Ensure data security, compliance, and access controls are properly implemented
• Troubleshoot and resolve complex data and platform-related issues
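The cost-optimization duty above often starts with simple right-sizing arithmetic. The sketch below illustrates the idea in plain Python; the DBU rate, DBU price, and cluster sizes are made-up assumptions for illustration, not figures from this posting or from Databricks pricing.

```python
# Hedged sketch: estimating the cost of a Databricks job cluster.
# All rates and cluster shapes below are illustrative assumptions only.

def estimate_job_cost(node_count: int, hours: float,
                      dbu_per_node_hour: float = 2.0,    # assumed DBU rate
                      usd_per_dbu: float = 0.15) -> float:  # assumed price
    """Return estimated USD cost: nodes x hours x DBUs/node-hour x $/DBU."""
    return node_count * hours * dbu_per_node_hour * usd_per_dbu

# Right-sizing example: halving an oversized cluster halves the bill
# (assuming, optimistically, that runtime stays the same).
before = estimate_job_cost(node_count=16, hours=4)
after = estimate_job_cost(node_count=8, hours=4)
print(f"before=${before:.2f} after=${after:.2f}")
```

In practice the runtime rarely stays flat when nodes are removed, so this kind of estimate is only a starting point for measuring actual workload cost before and after tuning.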
Required Skills & Experience
• 7+ years of experience in Data Engineering / Data Architecture
• 3+ years of hands-on experience with Databricks
• Strong expertise in Apache Spark (PySpark/Scala)
• Experience with Delta Lake, Unity Catalog, and Databricks Workflows
• Proficiency in cloud platforms (Azure, AWS, or GCP)
• Strong understanding of data modeling, ETL/ELT frameworks, and distributed systems
• Experience with data pipeline orchestration tools (ADF, Airflow, etc.)
• Solid knowledge of SQL and performance optimization techniques
• Experience implementing CI/CD pipelines for data platforms
• Strong problem-solving and stakeholder communication skills
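As a concrete instance of the "SQL and performance optimization" skill listed above, here is a minimal, self-contained sketch. SQLite stands in for a warehouse engine purely so the example runs anywhere (an assumption for portability, not a claim about Databricks SQL internals); the principle shown, that an index turns a full scan into a targeted search, carries over.

```python
import sqlite3

# Minimal SQL tuning sketch: SQLite stands in for a warehouse engine here
# so the example is self-contained and runnable.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id INTEGER, ts TEXT, payload TEXT)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?, ?)",
    [(i % 100, f"2026-03-{i % 28 + 1:02d}", "x") for i in range(1000)],
)

query = "SELECT COUNT(*) FROM events WHERE user_id = 42"

# Without an index, the planner must scan the whole table.
plan_before = conn.execute(f"EXPLAIN QUERY PLAN {query}").fetchone()[3]

# With an index on the filter column, it can seek directly to matching rows.
conn.execute("CREATE INDEX idx_events_user ON events (user_id)")
plan_after = conn.execute(f"EXPLAIN QUERY PLAN {query}").fetchone()[3]

print(plan_before)  # a SCAN over the table
print(plan_after)   # a SEARCH using idx_events_user
```

The same scan-versus-seek reasoning applies when tuning Spark SQL workloads, where partitioning, file layout, and statistics play the role the index plays here.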
