Databricks Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Databricks Data Engineer with 10-12 years of IT experience, focusing on ETL processes, Spark job optimization, and Delta Lake implementation. Contract length and pay rate are unspecified. Key skills include Databricks, Scala, and data quality assurance.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date discovered
April 24, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
Unknown
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
New Jersey, United States
🧠 - Skills detailed
#Data Processing #Databricks #Data Engineering #Data Transformations #Security #Scala #Data Pipeline #Spark (Apache Spark) #ETL (Extract, Transform, Load) #Delta Lake #Data Quality #Storage #Data Storage
Role description

Responsibilities and requirements:

   • 10-12 years of experience in the IT industry.

   • Designing, developing, and maintaining data pipelines for ETL (Extract, Transform, Load) processes and other data processing workflows.

   • Implementing and optimizing Spark jobs, data transformations, and data processing within the Databricks environment.

   • Working with Delta Lake tables to optimize data storage and query performance (see the illustrative sketch after this list).

   • Writing, testing, and deploying code for data solutions, ensuring they are high-performance and scalable.

   • Ensuring data quality, integrity, and security across various data sources and Databricks solutions.
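As a rough illustration of the kind of work described above, here is a minimal Scala/Spark sketch of an ETL job that cleanses raw data, applies a simple data-quality filter, and writes the result to a Delta Lake table. All table names, paths, and column names are placeholders, not details from the posting.

```scala
// Hypothetical sketch only: paths, columns, and app name are illustrative.
import org.apache.spark.sql.{SparkSession, functions => F}

object OrdersEtl {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("orders-etl") // placeholder app name
      .getOrCreate()

    // Extract: read raw JSON landed in cloud storage (placeholder path).
    val raw = spark.read.json("/mnt/raw/orders/")

    // Transform: basic cleansing plus a simple data-quality gate.
    val cleaned = raw
      .filter(F.col("order_id").isNotNull)                      // drop rows missing the key
      .withColumn("order_ts", F.to_timestamp(F.col("order_ts")))
      .dropDuplicates("order_id")

    // Load: write to a Delta table partitioned by date for query performance.
    cleaned
      .withColumn("order_date", F.to_date(F.col("order_ts")))
      .write
      .format("delta")
      .mode("append")
      .partitionBy("order_date")
      .save("/mnt/silver/orders") // placeholder Delta path

    // On Databricks, the table could then be compacted and clustered, e.g.:
    // spark.sql("OPTIMIZE delta.`/mnt/silver/orders` ZORDER BY (customer_id)")

    spark.stop()
  }
}
```

This pattern (raw ingestion, cleansing with quality checks, partitioned Delta output) is one common way the ETL, Spark optimization, and Delta Lake responsibilities listed above fit together; the actual project may structure its pipelines differently.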