Databricks Architect

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Databricks Architect, 100% remote, lasting 6+ months, with a pay rate of "$$$". Requires expertise in Databricks, Azure, Data Lake, PySpark, Unity Catalog, and Azure DevOps. Consulting experience in establishing frameworks and access control is essential.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
-
🗓️ - Date discovered
September 23, 2025
🕒 - Project duration
More than 6 months
-
🏝️ - Location type
Remote
-
📄 - Contract type
W2 Contractor
-
🔒 - Security clearance
Unknown
-
📍 - Location detailed
United States
-
🧠 - Skills detailed
#Azure Databricks #DevOps #GitHub #Consulting #Data Pipeline #PySpark #Data Lake #Azure #Databricks #Security #Spark (Apache Spark) #Observability #Data Engineering #Azure DevOps
Role description
Title: Databricks Architect (W2 only)
Location: 100% Remote
Duration: 6+ Months
Candidates need experience doing this from greenfield/scratch.
Must-have skills: Databricks, Azure, Data Lake, PySpark, Unity Catalog, Denodo or Rulex, Azure DevOps, CI/CD.

We are looking for a Databricks Platform Engineer who will focus on platform innovation rather than data engineering: someone who will help us define the standards and frameworks for our Databricks practice that our Data Engineering group will use. This consultant will build templates, create notebooks (with an understanding of how to take an organization from Bronze to Silver to Gold), and conduct observability reporting. The consultant should have a strong understanding of Unity Catalog, as much of this work revolves around it.

Experience
• Consulting Databricks engineering role establishing frameworks, reports, template notebooks, and other standards to be used by the Data Engineering team.
• This person will not be building data pipelines, but will need to deliver hands-on work in the form of reports and templates.
• Focus will be on establishing access-control security and best practices when setting up new Databricks pipelines.
• Seeking platform-level Databricks experience and advanced data engineering skills with both Azure Databricks and PySpark.
• Experience with Unity Catalog, as part of this role will be stitching together data in Unity Catalog.
• Ideal candidates will also have experience with Azure DevOps and GitHub for repos, CI/CD, and Boards.
• Experience with either Denodo or Rulex would be a major bonus.
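As a rough sketch of the kind of standards work described above (every name here is hypothetical, not taken from the posting), a platform template might pin down a medallion-layer naming convention for Unity Catalog's three-level namespace, `catalog.schema.table`, so that Bronze/Silver/Gold tables are addressed consistently across notebooks:

```python
# Hypothetical helper illustrating a medallion naming standard for
# Unity Catalog's three-level namespace (catalog.schema.table).
# The layer names and the example catalog/table names below are
# assumptions for illustration only, not from the job posting.

MEDALLION_LAYERS = ("bronze", "silver", "gold")

def qualified_table_name(catalog: str, layer: str, table: str) -> str:
    """Build a fully qualified Unity Catalog table name for a medallion layer."""
    if layer not in MEDALLION_LAYERS:
        raise ValueError(f"layer must be one of {MEDALLION_LAYERS}, got {layer!r}")
    return f"{catalog}.{layer}.{table}"

# Example: a Silver-layer "orders" table in a hypothetical "analytics" catalog.
print(qualified_table_name("analytics", "silver", "orders"))
```

A small convention like this is the sort of artifact the role's template notebooks could enforce, keeping layer names out of ad-hoc string literals in each pipeline.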