

STAFFWORXS
Senior Data Architect
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Architect with 10-15 years of experience, located in San Jose, California. The contract length is unspecified. The role focuses on Databricks, data architecture, ETL/ELT pipelines, and strong leadership skills.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
640
-
🗓️ - Date
March 3, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
San Jose, CA
-
🧠 - Skills detailed
#ETL (Extract, Transform, Load) #Delta Lake #Observability #Data Modeling #Databricks #Leadership #Compliance #Migration #Data Engineering #Documentation #Scala #Data Architecture #Data Layers #Data Pipeline #Security #Data Quality #Cloud #Version Control #DevOps #Spark (Apache Spark)
Role description
Job Title: Data Architect
Experience: 10 to 15 years
Location: San Jose, California, United States
Job Description:
We are looking for a Data Architecture & Engineering Lead who can define the long‑term data vision, build scalable data platforms, and champion data adoption across the organization. This hybrid role blends deep technical expertise in Databricks and modern data engineering with strong architectural leadership and data advocacy. The ideal candidate will shape our data ecosystem, drive standardization, and ensure teams across Adobe can confidently leverage high‑quality, well‑governed data for decision‑making.
- Design and evolve the enterprise‑wide data architecture using Databricks, Delta Lake, and cloud‑native patterns
- Define standards for data modeling, ingestion, transformation, governance, and lineage
- Establish architectural guardrails, reusable frameworks, and best practices
- Lead modernization initiatives, including migration from legacy pipelines to unified lakehouse architecture
- Ensure compliance with privacy, security, and regulatory requirements
- Build and optimize scalable ETL/ELT pipelines using Databricks, Spark, and Delta Lake
- Develop Bronze/Silver/Gold data layers with strong data quality and observability
- Automate workflows using Databricks Jobs, Workflows, and orchestration tools
- Implement CI/CD, version control, and DevOps practices for data pipelines
- Troubleshoot production issues and continuously improve performance and cost efficiency
- Support dashboard rationalization by consolidating redundant pipelines and metrics
- Champion data literacy and promote consistent use of standardized KPIs and data models
- Partner with business, analytics, and engineering teams to ensure data products meet real user needs
- Drive adoption of unified dashboards, rationalized metrics, and consistent reporting frameworks
- Conduct enablement sessions, documentation, and demos to improve data usability
- Identify gaps in data quality, accessibility, or clarity and work with teams to resolve them
- Serve as a connector between product, engineering, analytics, and business stakeholders
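The Bronze/Silver/Gold layering mentioned above can be sketched as follows. This is a minimal plain-Python stand-in, not the Databricks implementation the role actually calls for: a production pipeline would use PySpark DataFrames and Delta Lake tables, and the record fields (`id`, `amount`) and function names here are hypothetical, chosen only to make the layering logic concrete.

```python
# Simplified stand-in for a Bronze/Silver/Gold (medallion) pipeline.
# A real Databricks implementation would use PySpark and Delta Lake;
# plain Python structures are used here so the layering is easy to follow.

def to_bronze(raw_records):
    """Bronze: land raw records as-is, tagging each with a layer marker."""
    return [{**r, "_layer": "bronze"} for r in raw_records]

def to_silver(bronze):
    """Silver: cleanse and validate -- drop rows missing required fields."""
    required = {"id", "amount"}
    return [
        {**r, "_layer": "silver"}
        for r in bronze
        if required.issubset(r) and r["amount"] is not None
    ]

def to_gold(silver):
    """Gold: aggregate into a business-ready metric (total per id)."""
    totals = {}
    for r in silver:
        totals[r["id"]] = totals.get(r["id"], 0) + r["amount"]
    return totals

raw = [
    {"id": "a", "amount": 10},
    {"id": "a", "amount": 5},
    {"id": "b", "amount": None},   # fails silver validation
    {"amount": 7},                 # missing id, dropped at silver
]
gold = to_gold(to_silver(to_bronze(raw)))
print(gold)  # {'a': 15}
```

The design point the bullets emphasize survives even in this toy version: data quality checks live at the Silver boundary, so Gold aggregates only see validated rows.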





