Nasscomm

Databricks Engineer (W2 Only)

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Databricks Engineer (W2 only); the contract length and pay rate are unspecified. It requires 5+ years in Data Engineering, expertise in Databricks on cloud platforms, and strong skills in data pipelines, CI/CD, and data warehousing.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
May 6, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
United States
-
🧠 - Skills detailed
#Data Quality #AI (Artificial Intelligence) #Data Lake #Leadership #Jenkins #Apache Spark #Compliance #Azure #Scala #Databricks #ML (Machine Learning) #GCP (Google Cloud Platform) #Apache Airflow #Airflow #Storage #Data Engineering #Azure DevOps #Data Pipeline #Spark (Apache Spark) #DevOps #AWS (Amazon Web Services) #PySpark #Security #Automation #Cloud #Metadata #Delta Lake
Role description
• 5+ years of hands-on experience in Data Engineering, with strong expertise in Databricks on AWS, Azure, or GCP.
• Strong knowledge of Lakehouse architecture, Apache Spark, Delta Lake, PySpark, and enterprise data lake solutions.
• Experience designing and maintaining scalable data pipelines, distributed compute platforms, and cloud-native storage solutions.
• Hands-on expertise with Databricks tooling, including Delta Live Tables, Auto Loader, Structured Streaming, Databricks Workflows, and Apache Airflow.
• Skilled in data warehousing concepts, including 3NF, dimensional modeling, metadata-driven ingestion, and data quality frameworks.
• Experience implementing Unity Catalog, fine-grained security, governance, compliance, and access control strategies.
• Strong understanding of CI/CD pipelines using Azure DevOps, Jenkins, AWS CodePipeline, TFS, or PowerShell automation.
• Proven experience in performance tuning and optimization of Spark/Databricks pipelines, code, and compute resources.
• Leadership experience managing cross-functional teams and enterprise-scale data projects.
• Exposure to Databricks Lakeflow and AI/ML technologies is a plus.
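To illustrate the "metadata-driven ingestion and data quality frameworks" requirement above, here is a minimal sketch in plain Python. All names (`TableSpec`, `validate_rows`, the `orders` example) are hypothetical illustrations for this posting, not part of Databricks or any specific framework; in a real Databricks pipeline, such checks would typically run as Delta Live Tables expectations over Spark DataFrames.

```python
from dataclasses import dataclass, field

@dataclass
class TableSpec:
    """Metadata describing one ingestion target (hypothetical example)."""
    name: str
    required_columns: list
    not_null: list = field(default_factory=list)

def validate_rows(spec, rows):
    """Split rows into (good, bad) based on the table's metadata rules."""
    good, bad = [], []
    for row in rows:
        missing = [c for c in spec.required_columns if c not in row]
        nulls = [c for c in spec.not_null if row.get(c) is None]
        (bad if missing or nulls else good).append(row)
    return good, bad

# Example: one passing row, one null-key row, one missing-column row.
spec = TableSpec(name="orders",
                 required_columns=["order_id", "amount"],
                 not_null=["order_id"])
rows = [
    {"order_id": 1, "amount": 9.99},
    {"order_id": None, "amount": 5.00},  # fails not-null check
    {"amount": 3.50},                    # missing order_id
]
good, bad = validate_rows(spec, rows)
print(len(good), len(bad))  # → 1 2
```

The point of the pattern is that ingestion rules live in metadata (the spec) rather than per-table code, so adding a new source table means adding a spec, not a pipeline.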