

Qualis1 Inc.
Senior Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer with 14+ years of IT experience, requiring strong Databricks and Spark expertise. It offers a remote contract position, focusing on ETL pipeline development, Delta Lake, and Spark optimization. Pay rate is unspecified.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
May 14, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Remote
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
United States
-
🧠 - Skills detailed
#ETL (Extract, Transform, Load) #Scala #PySpark #Databricks #DataQuality #DeltaLake #ApacheSpark #DataEngineering
Role description
Hiring: Senior / Lead Databricks Data Engineer (Remote)
We are hiring an experienced Databricks Data Engineer for a 100% remote role.
⚠️ Mandatory Requirement:
Only candidates with 14+ years of overall IT experience and strong end-to-end Databricks and Spark experience will be considered. Profiles with fewer than 14 years of experience will not be considered.
Required Skills
• Databricks (hands-on, production experience)
• Apache Spark (PySpark / Scala – end to end)
• Delta Lake & Delta Live Tables (DLT)
• End-to-end ETL/ELT pipeline development
• Medallion Architecture (Bronze/Silver/Gold)
• Spark performance tuning & optimization
• Unity Catalog, Auto Loader, CI/CD basics
Role Highlights
• Design and build scalable Databricks Lakehouse solutions
• Develop and optimize Spark-based pipelines
• Implement DLT with data quality and governance standards
• Collaborate with stakeholders and lead data initiatives






