Cliff Services Inc

W2 - Sr. Databricks Engineer (PySpark and Data Lake) || New Jersey / Plano, TX

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a W2 - Sr. Databricks Engineer in New Jersey or Plano, TX, on a long-term contract. Requires 8+ years of data engineering experience, strong PySpark and Databricks skills, and expertise in ETL/ELT pipeline development.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
440
-
🗓️ - Date
May 6, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Plano, TX
-
🧠 - Skills detailed
#Data Quality #Data Lake #Batch #Apache Spark #Azure #Scala #Databricks #Strategy #Snowflake #Data Lineage #Migration #Data Governance #Web Services #Data Engineering #ETL (Extract, Transform, Load) #Data Pipeline #Data Processing #Deployment #Spark (Apache Spark) #UAT (User Acceptance Testing) #Python #AWS (Amazon Web Services) #Ab Initio #PySpark #Cloud #Microsoft Azure
Role description
Hiring: Senior Databricks Engineer (PySpark + Data Lake)
Location: NJ / Plano, TX (5 days onsite at any JPMC office location)
Employment Type: W2
Duration: Long-Term Contract
Openings: 5 positions

We are actively hiring experienced Senior Databricks Engineers to support a large-scale data modernization initiative. This role focuses on migrating legacy ETL workflows from Ab Initio to cloud-native Databricks pipelines using Apache Spark (PySpark).

Key Responsibilities
• Analyze and migrate legacy ETL workflows from Ab Initio to PySpark
• Design, develop, and optimize scalable data pipelines on Databricks
• Build and maintain ETL/ELT pipelines integrating data from Snowflake and other enterprise sources
• Support batch and near real-time data processing
• Create data lineage and data flow diagrams, and optimize data processes
• Develop unit, integration, and reconciliation testing frameworks
• Support deployment, migration strategy, and production cutovers
• Work with scheduling tools such as Control-M
• Collaborate with architects, analysts, and business stakeholders for UAT/FAT sign-offs

Required Skills
• 8+ years of data engineering experience
• Strong hands-on experience with PySpark and Databricks
• Experience with Ab Initio-to-PySpark migration
• Expertise in ETL/ELT pipeline development
• Strong knowledge of data lakes, data warehousing, and Snowflake
• Experience in data lineage, data governance, and data quality
• Strong coding skills in Python
• Experience with batch and real-time processing frameworks

Preferred
• Experience in the banking/financial services domain
• Experience with cloud platforms such as Amazon Web Services or Microsoft Azure
• Strong troubleshooting and production support experience