

Synergy Technologies
Senior/Lead Data Engineer with Expertise in Databricks and Big Data :: 15+ Yrs Exp. Required :: 100% Remote
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer with 15+ years of experience, including 4+ years in Databricks. It offers a 100% remote position, focusing on building scalable data pipelines, optimizing Spark performance, and requires strong SQL and Big Data skills.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
April 2, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Remote
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
United States
-
🧠 - Skills detailed
#Apache Spark #Data Engineering #Scala #Data Pipeline #Big Data #PySpark #SQL (Structured Query Language) #Databricks #Data Architecture #Delta Lake #Spark (Apache Spark) #ETL (Extract, Transform, Load)
Role description
Role: Senior Data Engineer with expertise in Databricks and Big Data.
Location: Remote USA
NOTES:
Experience required: 15+ years, with a strict requirement of at least 4 years of hands-on Databricks experience.
Key skills needed:
• Databricks (4+ years hands-on)
• Apache Spark / PySpark (6+ years)
• Big Data ecosystem experience
• Strong SQL and data engineering fundamentals
Key responsibilities include:
• Building and optimizing scalable data pipelines using Databricks
• Improving Spark performance and efficiency
• Developing ETL/ELT workflows with Delta Lake
• Contributing to data architecture decisions
• Driving best practices across data engineering teams
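The ETL/ELT responsibilities above follow a standard extract → transform → load shape. A minimal sketch of that shape in plain Python is below; in Databricks the same steps would operate on Spark DataFrames and write to Delta Lake tables, and all function names here are illustrative only, not part of any specific codebase.

```python
# Illustrative ETL skeleton. In a Databricks pipeline, extract() would map to
# spark.read on a source, transform() to DataFrame .filter/.withColumn logic,
# and load() to df.write.format("delta") against a Delta table.

def extract(rows):
    """Extract: materialize raw records from a source."""
    return list(rows)

def transform(records):
    """Transform: drop records with no amount, normalize amount to 2 decimals."""
    return [
        {**r, "amount": round(float(r["amount"]), 2)}
        for r in records
        if r.get("amount") is not None
    ]

def load(records, table):
    """Load: append cleaned records to the target table."""
    table.extend(records)
    return table

# Example run: one valid row kept and normalized, one invalid row dropped.
target = []
raw = [{"id": 1, "amount": "10.456"}, {"id": 2, "amount": None}]
load(transform(extract(raw)), target)
```

The same three-stage structure is what a Databricks workflow orchestrates at scale, with Spark handling distribution and Delta Lake providing transactional writes.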






