

Databricks Architect
Featured Role | Apply directly with Data Freelance Hub
This role is for a Databricks Architect with a contract length of "unknown," offering a pay rate of "$X/hour." Required skills include Databricks, Apache Spark, Python, SQL, cloud platforms, and data security. Preferred certifications: Databricks Certified Data Engineer or Architect.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: June 10, 2025
Project duration: Unknown
Location type: Unknown
Contract type: Unknown
Security clearance: Unknown
Location detailed: San Francisco Bay Area
Skills detailed: #AI (Artificial Intelligence) #Consulting #Tableau #ADF (Azure Data Factory) #SQL (Structured Query Language) #PySpark #Storage #Spark (Apache Spark) #Data Lake #Scala #Data Architecture #Data Orchestration #AWS (Amazon Web Services) #Data Lakehouse #MLflow #Data Governance #IAM (Identity and Access Management) #Leadership #Python #Databricks #Data Security #ML (Machine Learning) #Security #Terraform #Data Catalog #Cloud #Data Science #Apache Spark #Logging #ADLS (Azure Data Lake Storage) #DevOps #Airflow #Azure #BI (Business Intelligence) #GCP (Google Cloud Platform) #Infrastructure as Code (IaC) #Delta Lake #ETL (Extract, Transform, Load) #Data Modeling #Microsoft Power BI #Data Engineering #S3 (Amazon Simple Storage Service)
Role description
Key Responsibilities
• Design and implement end-to-end data architecture on Databricks (Spark, Delta Lake, MLflow).
• Develop and optimize large-scale ETL/ELT pipelines using PySpark/SQL (see the sketch after this list).
• Architect data lakes and lakehouses integrating cloud storage systems (e.g., ADLS, S3, GCS).
• Define and enforce best practices around security, data governance, and cost optimization.
• Lead technical workshops and collaborate with data engineers, data scientists, and DevOps.
• Implement data orchestration workflows (e.g., Databricks Workflows, Airflow, ADF).
• Integrate Databricks with third-party BI tools (Power BI, Tableau) and data catalogs (Unity Catalog, Hive Metastore).
• Provide architectural recommendations during pre-sales or stakeholder discussions (if consulting).
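To make the pipeline work above concrete, here is a minimal PySpark sketch of a raw-to-Delta ETL step. It assumes a Databricks cluster (or any Delta-enabled Spark environment); the landing path, table name, and column names are hypothetical placeholders, not part of the posting.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # pre-created as `spark` on Databricks

# Ingest raw JSON from a hypothetical landing zone (an ADLS/S3/GCS mount).
raw = spark.read.format("json").load("/mnt/landing/orders/")

# Basic cleansing: deduplicate on a business key and derive a partition column.
cleaned = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_date", F.to_date("order_ts"))
       .filter(F.col("amount") > 0)
)

# Write a managed Delta table; partitioning by date is one common cost/performance lever.
(
    cleaned.write.format("delta")
           .mode("overwrite")
           .partitionBy("order_date")
           .saveAsTable("main.sales.orders_clean")
)

In practice a step like this would typically be scheduled through Databricks Workflows, Airflow, or ADF rather than run ad hoc.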
Required Skills and Qualifications
• Strong experience with the Databricks platform and Apache Spark.
• Deep knowledge of Delta Lake, Unity Catalog, and Databricks Workflows.
• Proficient in Python and SQL; Scala is a plus.
• Hands-on experience with cloud platforms: Azure, AWS, or Google Cloud Platform.
• Experience with data security, IAM, RBAC, and audit logging (see the access-control sketch after this list).
• Proven expertise in designing data lakehouses, ETL/ELT workflows, and data models.
• Experience with CI/CD and infrastructure as code (Terraform, GitOps).
• Strong understanding of performance tuning and cost management.
• Excellent communication and leadership skills.
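As an illustration of the access-control item above, the following sketch grants read access on the Delta table from the earlier example using Unity Catalog SQL. It assumes a Unity Catalog-enabled workspace; the catalog, schema, table, and group names are hypothetical.

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # pre-created as `spark` on Databricks

# Grant a hypothetical analyst group read access to the table from the ETL sketch.
spark.sql("GRANT USE CATALOG ON CATALOG main TO `data-analysts`")
spark.sql("GRANT USE SCHEMA ON SCHEMA main.sales TO `data-analysts`")
spark.sql("GRANT SELECT ON TABLE main.sales.orders_clean TO `data-analysts`")

# Revoke write access from another hypothetical group.
spark.sql("REVOKE MODIFY ON TABLE main.sales.orders_clean FROM `contractors`")

Grants like these, combined with workspace audit logs, are the usual building blocks for the RBAC and audit-logging requirements listed above.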
Preferred Qualifications
• Databricks Certified Data Engineer Professional or Architect-level certification.
• Cloud certifications (e.g., Azure Solutions Architect, AWS Certified Data Analytics).
• Experience with ML/AI on Databricks (MLflow, Feature Store, AutoML); a tracking sketch follows this list.
• Exposure to data mesh, data products, or modern data stack concepts.
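Since the preferred qualifications call out MLflow, here is a minimal experiment-tracking sketch; the experiment path, parameter names, and metric values are hypothetical stand-ins.

import mlflow

mlflow.set_experiment("/Shared/demand-forecast")   # hypothetical experiment path
with mlflow.start_run(run_name="baseline"):
    mlflow.log_param("model", "gradient_boosting")
    mlflow.log_param("max_depth", 6)
    mlflow.log_metric("rmse", 12.4)                # stand-in value
    mlflow.set_tag("stage", "prototype")

On Databricks the MLflow tracking server is built in, so runs logged this way appear in the workspace experiment UI without extra configuration.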