

Smart IT Frame LLC
Databricks Developer/Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is a Databricks Developer/Data Engineer contract position, remote, requiring strong Databricks, Apache Spark, and advanced SQL skills. Experience with cloud platforms (Azure/AWS/GCP) and Unix is essential. Contract length and pay rate are unspecified.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
May 12, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Remote
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
United States
-
🧠 - Skills detailed
#Azure #Data Pipeline #Databricks #Informatica #Storage #Apache Spark #Delta Lake #Scala #Security #AWS (Amazon Web Services) #Cloud #GCP (Google Cloud Platform) #SQL (Structured Query Language) #Spark (Apache Spark) #Unix #Data Quality #ETL (Extract, Transform, Load)
Role description
Role: Databricks Developer/Data Engineer
Location: Remote
Hire type: Contract
Job Summary:
We are looking for a Databricks Developer to design, build, and optimize scalable data pipelines and analytics solutions on the Databricks platform.
Required Skills
• Strong Databricks and Apache Spark experience
• Advanced SQL
• Cloud data platforms (Azure / AWS / GCP)
• Unix; Informatica (preferred)
Key Responsibilities
• Design and implement data pipelines using Databricks & Apache Spark
• Develop notebooks and jobs for ETL/ELT workflows
• Integrate Databricks with cloud storage and Delta Lake
• Optimize queries and pipelines for performance and cost
• Implement data quality, governance, and security standards
• Monitor and troubleshoot production workflows
• Collaborate with engineers, analysts, and stakeholders
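As a small illustration of the data-quality responsibility above, here is a minimal plain-Python sketch of a row-level validation step of the kind a pipeline might apply before writing to Delta Lake. The column names (`order_id`, `amount`) and rules are hypothetical; in an actual Databricks job this logic would typically be expressed as a Spark DataFrame filter.

```python
def validate_rows(rows):
    """Split rows into valid and rejected lists using simple quality rules.

    Rules (illustrative only): order_id must be present, and amount must
    be a non-negative number.
    """
    valid, rejected = [], []
    for row in rows:
        amount = row.get("amount")
        if row.get("order_id") is not None and isinstance(amount, (int, float)) and amount >= 0:
            valid.append(row)
        else:
            rejected.append(row)
    return valid, rejected


sample = [
    {"order_id": 1, "amount": 25.0},
    {"order_id": None, "amount": 10.0},  # missing key -> rejected
    {"order_id": 2, "amount": -5.0},     # negative amount -> rejected
]
good, bad = validate_rows(sample)
print(len(good), len(bad))  # 1 2
```

Rejected rows would usually be routed to a quarantine table or logged for monitoring rather than silently dropped.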
Apply today or share profiles at Mario.i@smartitframe.com






