

Databricks Solutions Architect (REMOTE)
Featured Role | Apply direct with Data Freelance Hub
This role is for a "Databricks Solutions Architect" on a 6-month remote contract (with possible extension); the pay rate is not disclosed. Key skills include Apache Spark, Delta Lake, and cloud platforms (AWS, Azure, GCP). The role requires 7+ years of data architecture experience and strong programming skills in Python and SQL.
Country
United States
Currency
$ USD
Day rate
Unknown
Date discovered
May 22, 2025
Project duration
More than 6 months
Location type
Remote
Contract type
Unknown
Security clearance
Unknown
Location detailed
Allentown, PA
Skills detailed
#Consul #ADLS (Azure Data Lake Storage) #AI (Artificial Intelligence) #Databricks #GIT #BigQuery #S3 (Amazon Simple Storage Service) #Programming #Tableau #Data Quality #Data Lake #Redshift #Python #Consulting #Looker #Data Science #MLflow #SQL (Structured Query Language) #Cloud #GCP (Google Cloud Platform) #Spark (Apache Spark) #Compliance #Data Engineering #Kafka (Apache Kafka) #Data Ingestion #Microsoft Power BI #Security #Apache Spark #Data Architecture #Snowflake #ETL (Extract, Transform, Load) #Scala #Data Lineage #Data Pipeline #Data Governance #BI (Business Intelligence) #IAM (Identity and Access Management) #Data Warehouse #ML (Machine Learning) #DevOps #Delta Lake #Migration #Data Lakehouse #AWS (Amazon Web Services) #Azure
Role description
Job Title: Databricks Solutions Architect
Location: Remote (CST time zone)
Duration: 6-month contract with possible extension
About the Role
We are seeking a highly skilled and experienced Databricks Solutions Architect to lead the design, implementation, and optimization of large-scale data and AI solutions on the Databricks Lakehouse Platform. You will serve as a strategic advisor to clients and internal teams, bridging business needs with technical capabilities, and delivering high-impact data-driven outcomes.
This role requires deep expertise in Apache Spark, Delta Lake, data lakehouse architecture, and cloud ecosystems (AWS, Azure, or GCP). You will also work closely with Data Engineers, Data Scientists, and Business Stakeholders to design end-to-end solutions.
Key Responsibilities
• Architect Databricks-based solutions across data ingestion, processing, modeling, machine learning, and analytics use cases.
• Design and implement data lakehouse architectures using Delta Lake, Unity Catalog, and MLflow.
• Collaborate with data engineering teams to optimize ETL/ELT pipelines, ensuring scalability, performance, and data quality (a brief sketch of this kind of pipeline follows this list).
• Work directly with clients to understand business goals, translate them into technical solutions, and drive adoption of Databricks best practices.
• Lead proofs of concept (PoCs) and solution design workshops to demonstrate the value of Databricks across domains.
• Define and enforce security, governance, and compliance standards in Databricks implementations (e.g., IAM, data lineage, audit trails).
• Provide technical guidance on cost optimization, workload migration, and integration with third-party tools (e.g., Tableau, Power BI, Kafka, Snowflake).
• Support presales and post-sales technical discussions as needed for client engagements.
• Stay current with Databricks product updates, open-source trends (e.g., Apache Spark, Delta Lake, MLflow), and cloud services evolution.
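For illustration, here is a minimal sketch of the kind of ETL/ELT pipeline work described above: landing raw data in a bronze Delta table, then publishing a validated, de-duplicated silver table. The paths, column names, and quality rules are hypothetical; on Databricks the spark session is preconfigured, so the builder block below is only needed when running locally with the delta-spark package installed.

```python
from pyspark.sql import SparkSession, functions as F

# Only needed outside Databricks; assumes the delta-spark package is installed.
spark = (
    SparkSession.builder.appName("bronze-to-silver-sketch")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

# Bronze: ingest raw JSON as-is, stamping each record with an audit timestamp.
# The paths below are hypothetical placeholders.
raw = (spark.read.json("/mnt/raw/events/")
       .withColumn("_ingested_at", F.current_timestamp()))
raw.write.format("delta").mode("append").save("/mnt/bronze/events")

# Silver: apply a simple data-quality rule (non-null business key, positive
# amount) and de-duplicate before publishing for downstream analytics.
bronze = spark.read.format("delta").load("/mnt/bronze/events")
silver = (bronze
          .filter(F.col("event_id").isNotNull() & (F.col("amount") > 0))
          .dropDuplicates(["event_id"]))
silver.write.format("delta").mode("overwrite").save("/mnt/silver/events")
```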
Required Qualifications
• 7+ years of experience in data architecture, engineering, or analytics roles, including at least two years as a Databricks Architect or Engineer.
• Strong experience with Apache Spark and the Databricks platform (including Delta Live Tables, DBSQL, Unity Catalog).
• Hands-on expertise with cloud platforms (AWS, Azure, or GCP), especially their data services (e.g., S3, ADLS, BigQuery, Redshift).
• Strong programming skills in Python and SQL; Scala is a plus.
• Deep understanding of modern data architectures, including data lakes, data warehouses, and lakehouses.
• Experience building end-to-end data pipelines and ML workflows (see the MLflow sketch after this list).
• Familiarity with CI/CD practices, Git, and DevOps for data.
• Strong communication and stakeholder management skills; able to convey technical ideas to business users and vice versa.
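As a small illustration of the ML-workflow experience called for above, the sketch below logs a training run with MLflow experiment tracking. The dataset, model, and parameter values are placeholders; it assumes the mlflow and scikit-learn packages are available (both ship preinstalled on Databricks ML runtimes).

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Placeholder data; a real workflow would read from a Delta table instead.
X, y = make_classification(n_samples=1_000, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run(run_name="rf-baseline"):
    params = {"n_estimators": 100, "max_depth": 5}
    model = RandomForestClassifier(**params, random_state=42).fit(X_train, y_train)

    mlflow.log_params(params)                 # record hyperparameters
    acc = accuracy_score(y_test, model.predict(X_test))
    mlflow.log_metric("accuracy", acc)        # record the evaluation metric
    mlflow.sklearn.log_model(model, "model")  # version the fitted model artifact
```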
Preferred Qualifications
• Databricks certification(s), e.g., Databricks Certified Data Engineer Professional or Solutions Architect Associate.
• Experience with MLflow, Delta Live Tables, Unity Catalog, and Job Workflows.
• Experience with BI tools (Power BI, Tableau, Looker).
• Prior consulting or customer-facing solutioning experience.
• Experience in regulated industries (e.g., financial services, healthcare) with a focus on compliance and data governance.