

Databricks Developer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Databricks Developer with 7–10 years of experience, offering a competitive pay rate. The contract length is unspecified. Key skills include advanced SQL, PySpark, and Azure. Experience with data migration and Agile methodologies is required.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
360
-
🗓️ - Date discovered
June 12, 2025
🕒 - Project duration
Unknown
-
🏝️ - Location type
Unknown
-
📄 - Contract type
Unknown
-
🔒 - Security clearance
Unknown
-
📍 - Location detailed
Santa Clara, CA
-
🧠 - Skills detailed
#Data Modeling #Databases #Data Migration #Data Lake #PySpark #Cloud #Monitoring #ETL (Extract, Transform, Load) #SQL (Structured Query Language) #Data Access #Migration #Azure #Databricks #Data Warehouse #PostgreSQL #Impala #Data Lakehouse #Agile #Spark (Apache Spark) #Python
Role description
Databricks JD
We are seeking candidates for multiple positions (5–7 and 7–10 years of experience) in Databricks, along with advanced SQL knowledge.
The primary responsibility of this role is to provide technical support for the Databricks environment.
Key Responsibilities:
• Bug fixes in the Databricks environment
• Monitor, transform, and optimize ETL pipelines in Databricks
• Knowledge of Data Lakehouse architecture and at least mid-level PySpark proficiency
• Experience with complex data migration is a plus
• Ensure data accessibility and integrity for the migrated objects
• Collaborate effectively with cross-functional teams
• Communicate progress and challenges clearly to stakeholders
Qualifications:
• Experience in SQL and Big Data technologies
• Proficiency in Spark and Impala/Hive
• Experience with Databricks and cloud platforms, particularly Azure
• Good understanding of data modeling concepts and data warehouse designs
• Excellent problem-solving skills and a passion for data accessibility
• Effective communication and collaboration skills
• Experience with Agile methodologies
Data Platform: SQL, PySpark, Python, Databricks, job monitoring using Redwood, and other open-source relational databases such as PostgreSQL