

AWS Databricks Architect
Featured Role | Apply direct with Data Freelance Hub
This role is for an AWS Databricks Architect; contract length and pay rate are unspecified. Key skills include Databricks, AWS, Spark, and ETL/ELT. Requires 12+ years in data architecture; Databricks certification preferred.
Country
United States
Currency
$ USD
Day rate
Unknown
Date discovered
August 15, 2025
Project duration
Unknown
Location type
Unknown
Contract type
Unknown
Security clearance
Unknown
Location detailed
Santa Clara, CA
Skills detailed
#Storage #Microsoft Power BI #Deployment #Tableau #DevOps #Automation #Delta Lake #Snowflake #Metadata #ML (Machine Learning) #BI (Business Intelligence) #Apache Spark #AI (Artificial Intelligence) #Documentation #Data Modeling #Infrastructure as Code (IaC) #API (Application Programming Interface) #Leadership #Migration #Airflow #Spark (Apache Spark) #Data Management #AWS (Amazon Web Services) #Databricks #Cloud #PySpark #Compliance #Data Architecture #Data Governance #Security #Scala #Terraform #Data Engineering #S3 (Amazon Simple Storage Service) #Data Quality #SQL (Structured Query Language) #dbt (data build tool) #ETL (Extract, Transform, Load) #Batch #Apache Airflow #MLflow #IAM (Identity and Access Management)
Role description
We're seeking a visionary Data Architect with deep expertise in Databricks to lead the design, implementation, and optimization of our enterprise data architecture. You'll be instrumental in shaping scalable data solutions that empower analytics, AI, and business intelligence across the organization.
If you thrive in a fast-paced environment, love solving complex data challenges, and have a passion for cloud-native platforms like AWS Databricks, we want to hear from you.
Responsibilities
• Design & Architecture: Architect scalable, secure Lakehouse and data platform solutions using Databricks, Spark, Delta Lake, and cloud storage.
• Implementation & Development: Lead implementation of ETL/ELT pipelines (batch & real-time), Databricks notebooks, jobs, Structured Streaming, and PySpark/Scala code for production workloads.
• Data Modeling & Pipelines: Define canonical data models, schema evolution strategies, and optimized ingestion patterns to support analytics, BI, and ML use cases.
• Performance & Cost Optimization: Tune Spark jobs, Delta tables, cluster sizing, and storage to balance performance, latency, and cost-efficiency.
• Governance & Compliance: Implement data quality, lineage, access controls, and compliance measures (Unity Catalog, RBAC) to meet internal and regulatory standards.
• Migration & Modernization: Lead migration from legacy warehouses to cloud platforms and modern data architectures (Databricks, Snowflake), ensuring minimal disruption.
• DevOps & Automation: Define CI/CD and Infrastructure as Code best practices for data platform deployments (Terraform, CI pipelines, Databricks Jobs API).
• Leadership & Mentorship: Mentor engineers, drive architecture reviews, and collaborate with stakeholders to translate business needs into technical solutions.
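At its core, the pipeline work above means moving raw records through validated, deduplicated layers. As a rough illustration only (plain Python standing in for the PySpark/Delta Lake code the role actually calls for; the records and rules are hypothetical), a bronze-to-silver cleansing step might look like:

```python
from datetime import datetime, timezone

# Hypothetical "bronze" records as they might land from raw ingestion.
BRONZE = [
    {"id": 1, "amount": "42.50", "ts": "2025-08-01T10:00:00"},
    {"id": 1, "amount": "42.50", "ts": "2025-08-01T10:00:00"},  # duplicate
    {"id": 2, "amount": None,    "ts": "2025-08-01T11:00:00"},  # fails quality rule
    {"id": 3, "amount": "19.99", "ts": "2025-08-01T12:00:00"},
]

def to_silver(records):
    """Apply a quality filter, dedup on the business key, and cast to the
    target schema -- the same shape of logic a PySpark job would express
    with DataFrame operations over Delta tables."""
    seen, silver = set(), []
    for rec in records:
        if rec["amount"] is None:   # quality rule: amount is required
            continue
        if rec["id"] in seen:       # dedup on business key
            continue
        seen.add(rec["id"])
        silver.append({
            "id": rec["id"],
            "amount": float(rec["amount"]),  # cast to the target schema
            "ts": datetime.fromisoformat(rec["ts"]).replace(tzinfo=timezone.utc),
        })
    return silver

silver = to_silver(BRONZE)
print([r["id"] for r in silver])  # -> [1, 3]
```

In a real Databricks deployment the dedup and cast would typically be a `MERGE INTO` against a Delta table, with the quality rules enforced by table constraints or an expectations framework.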
Required Skills & Qualifications
• 12+ years in data architecture with 5+ years hands-on experience in Databricks or equivalent Lakehouse platforms.
• Strong experience with Snowflake and modern data warehousing patterns.
• Cloud & Platform: Proven experience on AWS (preferably with AWS Databricks); familiarity with S3, IAM, Glue, and networking for secure deployments.
• Core Technologies: Deep proficiency in Apache Spark, Delta Lake, PySpark/Scala, SQL, and performance tuning.
• Data Engineering: Experience designing ETL/ELT pipelines, data modeling, partitioning strategies, and data quality frameworks.
• Automation & DevOps: Familiarity with CI/CD, Terraform (or other IaC), Databricks Jobs API, and pipeline orchestration tools (Airflow, Prefect, dbt).
• Governance & Security: Knowledge of data governance, metadata management, lineage, RBAC, and regulatory compliance best practices.
• Communication: Excellent stakeholder management, documentation, and cross-functional collaboration skills.
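On the CI/CD side, triggering a Databricks job from a pipeline is typically a single authenticated POST to the Jobs API `run-now` endpoint. A minimal sketch of building that request (the workspace URL, token, and job ID below are placeholders, and the request is only constructed here, never sent):

```python
import json
import urllib.request

def build_run_now_request(host, token, job_id, notebook_params=None):
    """Build a Databricks Jobs API 2.1 run-now request.
    `host`, `token`, and `job_id` are hypothetical placeholders."""
    payload = {"job_id": job_id}
    if notebook_params:
        payload["notebook_params"] = notebook_params
    return urllib.request.Request(
        url=f"{host}/api/2.1/jobs/run-now",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Example: a CI step kicking off job 123 with a runtime parameter.
req = build_run_now_request(
    "https://example.cloud.databricks.com",
    "dapi-example-token",
    123,
    notebook_params={"env": "prod"},
)
print(req.full_url)  # -> https://example.cloud.databricks.com/api/2.1/jobs/run-now
```

In practice this call would live in a CI pipeline step (or be replaced by the Databricks Terraform provider or CLI), with the token sourced from a secret store rather than hardcoded.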
Preferred Qualifications
• Databricks Certified Data Engineer or Architect.
• Experience with MLflow, Unity Catalog, and Lakehouse architecture.
• Background in machine learning, AI, or advanced analytics.
• Experience with tools like Apache Airflow, dbt, or Power BI/Tableau.
Skills: data, aws, data architecture, databricks