Greymatter Innovationz

Databricks Developer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Databricks Developer with 5+ years of data engineering experience, offering a long-term contract in Princeton, NJ. Key skills include Apache Spark, SQL, and Databricks platform expertise. Databricks certification and industry experience in pharmaceuticals or finance are preferred.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
March 19, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Princeton, NJ
-
🧠 - Skills detailed
#Snowflake #Apache Airflow #Azure #ADF (Azure Data Factory) #AWS S3 (Amazon Simple Storage Service) #Version Control #Data Science #Apache Spark #Scala #Cloud #ADLS (Azure Data Lake Storage) #Automation #Data Modeling #Spark SQL #Git #ETL (Extract, Transform, Load) #PySpark #Data Lake #Databricks #Data Processing #SQL Queries #Business Analysis #Databases #AWS (Amazon Web Services) #Data Architecture #Data Engineering #Delta Lake #SQL (Structured Query Language) #Synapse #Azure Data Factory #Data Pipeline
Role description
Greymatter Innovationz helps you stay digitally relevant across domains, technologies, and skill sets, every day.

We are looking for: Databricks Developer (Contract)
Location: Princeton, NJ (On-site/Hybrid)
Experience: 5+ years
Duration: Long-term contract

About the Role
We are looking for a highly skilled Databricks Developer to join our data engineering team in Princeton, NJ. In this role, you will design, develop, and optimize complex data pipelines on the Databricks Lakehouse platform. This is a long-term contract opportunity, ideal for someone who thrives on building scalable, high-performance data architectures.

Key Responsibilities
• Pipeline Development: Design and implement end-to-end data pipelines (ETL/ELT) using PySpark, SQL, and Delta Lake.
• Optimization: Performance-tune Spark jobs and optimize SQL queries to ensure efficient data processing.
• Data Modeling: Build and maintain robust data models (Medallion Architecture: Bronze, Silver, and Gold layers).
• Integration: Connect Databricks to various data sources (Azure Data Lake, AWS S3, or on-premises databases) and downstream analytics tools.
• Automation: Orchestrate workflows using Databricks Workflows or external tools such as Apache Airflow or Azure Data Factory.
• Collaboration: Work closely with data scientists and business analysts to transform raw data into actionable insights.

Required Qualifications
• Experience: Minimum of 5 years of professional experience in data engineering.
• Core Skills: Expert-level proficiency in Apache Spark (PySpark or Scala) and SQL.
• Platform Expertise: Proven experience building solutions specifically on the Databricks platform, including Unity Catalog and Delta Live Tables (DLT).
• Cloud Platforms: Strong experience with either Azure (ADLS, Synapse) or AWS infrastructure.
• Data Warehousing: Solid understanding of Lakehouse architecture and star/snowflake schemas.
• CI/CD: Experience with version control (Git) and deploying data pipelines via CI/CD tools.
Preferred Skills
• Databricks Certified Data Engineer Professional certification.
• Experience with real-time streaming using Spark Structured Streaming.
• Background in the pharmaceutical or financial services industry (highly relevant for the Princeton area).

At Greymatter Innovationz, we offer:
• A motivating work environment
• An excellent work culture
• Support to take your skills to the next level
• And more!