

Data Architect – Databricks
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Architect – Databricks in Santa Clara, CA, for 6+ months at an unspecified pay rate. It requires 12+ years in data architecture, 5+ years hands-on with Databricks, strong AWS and Snowflake experience, and relevant certifications.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date discovered
August 19, 2025
🕒 - Project duration
More than 6 months
🏝️ - Location type
On-site
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
Santa Clara, CA
🧠 - Skills detailed
#Security #Snowflake #Spark (Apache Spark) #AI (Artificial Intelligence) #Data Lakehouse #Apache Airflow #Infrastructure as Code (IaC) #Data Quality #Tableau #Data Engineering #MLflow #BI (Business Intelligence) #Data Lake #Airflow #Apache Spark #AWS (Amazon Web Services) #Microsoft Power BI #Data Architecture #Data Ingestion #Compliance #dbt (data build tool) #Storage #Data Processing #Delta Lake #Batch #Scala #Databricks #ML (Machine Learning) #DevOps #Data Governance #Cloud #ETL (Extract, Transform, Load) #Migration #Terraform #PySpark
Role description
Job Title: Data Architect – Databricks
Location: Santa Clara, CA (Fully Onsite; Local Candidates Only)
Duration: 6+ Months
Only USC/GC candidates preferred.
Note: Strong experience in Databricks, AWS, and Snowflake is required. Please look for candidates who have worked on long-term projects.
About The Role
We’re seeking a visionary Data Architect with deep expertise in Databricks to lead the design, implementation, and optimization of our enterprise data architecture. You’ll be instrumental in shaping scalable data solutions that empower analytics, AI, and business intelligence across the organization.
If you thrive in a fast-paced environment, love solving complex data challenges, and have a passion for cloud-native platforms like AWS Databricks, we want to hear from you.
Key Responsibilities
• Design and implement robust, scalable, and secure data architectures using Databricks, Spark, Delta Lake, and cloud-native tools.
• Collaborate with data engineers, analysts, and business stakeholders to define data models, pipelines, and governance strategies.
• Develop and maintain data lakehouses, ensuring optimal performance and cost-efficiency.
• Define best practices for data ingestion, transformation, and storage using Databricks notebooks, jobs, and workflows.
• Architect solutions for real-time and batch data processing (a minimal PySpark sketch follows this list).
• Ensure data quality, lineage, and compliance with internal and external standards.
• Lead migration efforts from legacy systems to modern cloud-based data platforms.
• Mentor junior team members and evangelize data architecture principles across the organization.
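To make the ingestion and real-time/batch responsibilities above concrete, here is a minimal PySpark sketch of batch and streaming writes into a Delta bronze table. It assumes a Databricks runtime (where Delta Lake and the Auto Loader "cloudFiles" source are available); the paths, table name, and column names are hypothetical placeholders, not part of this role's actual stack.

```python
# Minimal sketch, assuming a Databricks runtime where Delta Lake and
# Auto Loader are available. Paths and table names are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("bronze-ingest").getOrCreate()

RAW_PATH = "/mnt/raw/events"              # hypothetical landing zone
BRONZE_TABLE = "lakehouse.bronze_events"  # hypothetical Delta table

# Batch ingestion: append a periodic extract to the bronze layer.
(spark.read.json(RAW_PATH)
      .withColumn("_ingested_at", F.current_timestamp())
      .write.format("delta").mode("append")
      .saveAsTable(BRONZE_TABLE))

# Streaming ingestion of the same zone via Auto Loader (Databricks-specific
# "cloudFiles" source); a real pipeline would use one mode or the other,
# not both over the same path.
(spark.readStream.format("cloudFiles")
      .option("cloudFiles.format", "json")
      .option("cloudFiles.schemaLocation", "/mnt/chk/bronze_events/schema")
      .load(RAW_PATH)
      .withColumn("_ingested_at", F.current_timestamp())
      .writeStream.format("delta")
      .option("checkpointLocation", "/mnt/chk/bronze_events")
      .toTable(BRONZE_TABLE))
```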
Required Skills & Qualifications
• 12+ years of experience in data architecture, with 5+ years hands-on in Databricks.
• Strong experience in Snowflake.
• Experience with cloud platforms, especially AWS and Databricks on AWS.
• Strong proficiency in Apache Spark, Delta Lake, and PySpark.
• Experience with data modeling, ETL/ELT pipelines, and data warehousing (a minimal Delta Lake upsert sketch follows this list).
• Familiarity with CI/CD, DevOps, and Infrastructure as Code (Terraform, ARM templates).
• Knowledge of data governance, security, and compliance frameworks.
• Excellent communication and stakeholder management skills.
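For the ETL/ELT and Delta Lake items above, the following is a minimal upsert sketch using the delta-spark MERGE API. It assumes a Spark session with delta-spark configured; the staging path, table, and key column are hypothetical.

```python
# Minimal Delta Lake upsert sketch, assuming a Spark session with the
# delta-spark package configured. Table and column names are hypothetical.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("silver-upsert").getOrCreate()

updates = spark.read.parquet("/mnt/staging/customers")  # placeholder source

(DeltaTable.forName(spark, "lakehouse.silver_customers")
    .alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()      # refresh rows whose key already exists
    .whenNotMatchedInsertAll()   # insert rows with new keys
    .execute())
```

MERGE of this kind is the usual way to keep a silver-layer table idempotent when the same extract may arrive more than once.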
Preferred Qualifications
• Databricks Certified Data Engineer or Architect.
• Experience with MLflow, Unity Catalog, and Lakehouse architecture (a minimal MLflow tracking sketch follows this list).
• Background in machine learning, AI, or advanced analytics.
• Experience with tools like Apache Airflow, dbt, or Power BI/Tableau.
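To make the MLflow item concrete, here is a minimal experiment-tracking sketch using the standard MLflow Python API; the experiment path, parameters, and metric value are purely illustrative.

```python
# Minimal MLflow tracking sketch, assuming the mlflow package is installed.
# Experiment path, parameters, and metric values are illustrative only.
import mlflow

mlflow.set_experiment("/Shared/churn-demo")  # hypothetical experiment path

with mlflow.start_run(run_name="baseline"):
    mlflow.log_param("model_type", "gradient_boosting")
    mlflow.log_param("max_depth", 6)
    mlflow.log_metric("auc", 0.87)  # placeholder metric
```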