Hays

Databricks Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Databricks Data Engineer on a 4-month hybrid contract based in Englewood Cliffs, NJ or New York City, NY. It requires 5+ years of ETL/ELT pipeline experience, proficiency in Python, SQL, and PySpark, and cloud platform knowledge.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date
February 4, 2026
🕒 - Duration
3 to 6 months
🏝️ - Location
Hybrid
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
Englewood Cliffs, NJ
🧠 - Skills detailed
#GIT #Delta Lake #Apache Spark #Python #AWS (Amazon Web Services) #Databricks #Data Pipeline #Data Engineering #DevOps #SQL (Structured Query Language) #GCP (Google Cloud Platform) #Spark (Apache Spark) #Version Control #Data Lake #ETL (Extract, Transform, Load) #Data Lakehouse #Azure #PySpark #Scala #Data Modeling #Cloud #Data Science
Role description
Job Title: Databricks Data Engineer
Location: Englewood Cliffs, NJ or New York City, NY / Hybrid
Contract length: 4 Months

Requirements:
• 5+ years of experience designing and implementing ETL/ELT pipelines using Databricks and Apache Spark.
• Strong proficiency in Python, SQL, and PySpark.
• Knowledge of Delta Lake, data lakehouse concepts, and streaming data.
• Familiarity with CI/CD pipelines, version control (Git), and DevOps practices.
• Understanding of data modeling, data warehousing, and performance tuning.
• Hands-on experience with cloud platforms (AWS, Azure, or GCP).

Responsibilities:
• Develop and maintain data lakehouse architectures for structured and unstructured data.
• Optimize data workflows for performance, scalability, and cost efficiency.
• Collaborate with data scientists, analysts, and business stakeholders to deliver high-quality data solutions.
• Monitor and troubleshoot data pipelines, ensuring reliability and accuracy.
• Integrate Databricks with cloud services (AWS, Azure, or GCP) and other enterprise systems.