Databricks Architect (Azure Cloud)

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Databricks Architect (Azure Cloud) on a contract basis, offering remote work at a competitive pay rate. Requires 8+ years in data engineering, 3+ years with Azure Databricks, and strong skills in Apache Spark and data governance.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
-
🗓️ - Date discovered
June 4, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
Remote
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
Warren, NJ
🧠 - Skills detailed
#Azure #Security #ADLS (Azure Data Lake Storage) #Compliance #BI (Business Intelligence) #Azure ADLS (Azure Data Lake Storage) #Spark (Apache Spark) #Data Quality #Data Engineering #Delta Lake #Strategy #ADF (Azure Data Factory) #Microsoft Power BI #Scala #Apache Spark #Data Ingestion #Data Processing #Synapse #Computer Science #ETL (Extract, Transform, Load) #Storage #Azure DevOps #Azure cloud #Cloud #Data Architecture #PySpark #Data Lakehouse #Kafka (Apache Kafka) #MLflow #Data Modeling #Azure Databricks #Data Science #Azure Data Factory #Leadership #Data Governance #Data Lake #Data Security #Databricks #DevOps
Role description
Job Title: Databricks Architect (Azure Cloud)
Location: Remote
Job Type: Contract
Job Summary: We are seeking a highly skilled Databricks Architect with deep expertise in designing and implementing data lakehouses and data processing pipelines on Azure Cloud using Azure Databricks. The ideal candidate will lead the architectural strategy, framework, and delivery of scalable data solutions that support our data-driven initiatives across the organization.
Key Responsibilities:
• Architect and design scalable data solutions using Azure Databricks, Delta Lake, and Spark.
• Lead the development of data ingestion, transformation, and orchestration pipelines across structured and unstructured data.
• Collaborate with data engineers, data scientists, DevOps, and business stakeholders to ensure solution alignment with business objectives.
• Define best practices and enforce data architecture standards across multiple teams.
• Optimize Databricks clusters for cost, performance, and scalability.
• Integrate Databricks with other Azure services like Azure Data Lake Storage (ADLS), Azure Synapse, Azure Data Factory (ADF), Azure DevOps, and Power BI.
• Implement robust data security, governance, and compliance practices, including role-based access control (RBAC), data masking, and auditing.
• Evaluate and recommend tools, technologies, and processes to ensure the highest quality data solutions.
• Monitor platform usage and provide recommendations for optimization.
Required Qualifications:
• Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
• 8+ years of experience in data engineering, data architecture, or similar roles.
• 3+ years of hands-on experience architecting and implementing solutions on Azure Databricks.
• Strong experience with Apache Spark, Delta Lake, and PySpark/Scala.
• Proficiency with Azure Data Lake Storage Gen2, Azure Synapse, and ADF.
• Strong understanding of Lakehouse architecture and modern data platform patterns.
• Experience with CI/CD pipelines using Azure DevOps.
• Deep knowledge of data modeling, performance tuning, and optimization in large-scale environments.
• Familiarity with data governance, data quality, and security frameworks on Azure.
Preferred Qualifications:
• Azure certifications such as Azure Data Engineer Associate or Azure Solutions Architect Expert.
• Experience with MLflow, Unity Catalog, and Databricks Workflows.
• Knowledge of streaming data technologies (e.g., Kafka, Event Hubs, Structured Streaming).
• Strong leadership and communication skills.