

RSA Tech
Senior Databricks Architect - W2 Position
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Databricks Architect in Edison, NJ, with a contract length of over 6 months and a pay rate of "$X per hour". Candidates must have 12+ years of experience and expertise in Databricks, Apache Spark, and cloud platforms (AWS/Azure).
Country
United States
Currency
Unknown
-
Day rate
Unknown
-
Date
February 12, 2026
Duration
More than 6 months
-
Location
On-site
-
Contract
W2 Contractor
-
Security
Unknown
-
Location detailed
Edison, NJ 08817
-
Skills detailed
#Azure #Data Lake #Migration #PySpark #SQL (Structured Query Language) #Data Governance #AWS (Amazon Web Services) #Apache Spark #ETL (Extract, Transform, Load) #Security #Data Migration #Data Orchestration #Spark (Apache Spark) #Cloud #Data Architecture #ADF (Azure Data Factory) #Data Modeling #Data Engineering #Delta Lake #Data Processing #Azure cloud #Databricks #Leadership #Data Lakehouse #Scala #Airflow
Role description
Note: We are currently unable to sponsor. We encourage US citizens and green card holders to apply.
Job Title: Databricks Architect
Location: Edison, NJ (Onsite)
Experience: 12+ Years
Must Have: Databricks Architect experience, Data Lakehouse, ETL/ELT, Apache Spark, PySpark, Delta Lake, and SQL.
Job Summary:
We are seeking a highly skilled Databricks Architect to design, build, and optimize scalable data platforms on Databricks. The ideal candidate will lead architecture, implementation, and performance optimization of enterprise data solutions using modern data engineering and cloud technologies.
Key Responsibilities:
· Design and implement end-to-end data architecture using Databricks Lakehouse.
· Lead data migration, modernization, and optimization initiatives.
· Develop scalable ETL/ELT pipelines using Spark, PySpark, and Delta Lake.
· Define data modeling, governance, security, and performance best practices.
· Collaborate with business, engineering, and analytics teams to deliver data solutions.
· Optimize workloads for performance, cost, and reliability in cloud environments (AWS/Azure).
· Provide technical leadership, architecture reviews, and mentoring.
Required Skills:
· 6+ years of hands-on Databricks architecture experience.
· Strong expertise in Apache Spark, PySpark, Delta Lake, and SQL.
· Experience with Data Lakehouse, data modeling, and large-scale data processing.
· Hands-on experience with AWS or Azure cloud platforms.
· Knowledge of CI/CD, data orchestration (ADF/Airflow), and performance tuning.
· Strong understanding of data governance, security, and best practices.
Job Types: Full-time, Contract
Work Location: In person






