Sharp Decisions

Senior Databricks Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Databricks Engineer, contract-to-hire, located in Jersey City, NJ (hybrid, 3 days on-site). Requires 10+ years in IT development, strong Azure Databricks expertise, and knowledge of cybersecurity data governance. W2 only.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date
December 11, 2025
🕒 - Duration
Unknown
🏝️ - Location
Hybrid
📄 - Contract
W2 Contractor
🔒 - Security
Unknown
📍 - Location detailed
Jersey City, NJ
🧠 - Skills detailed
#Metadata #Visualization #Cloud #Azure DevOps #Azure Data Factory #GIT #SQL (Structured Query Language) #Collibra #Python #Data Ingestion #SharePoint #Business Analysis #Data Architecture #Monitoring #ADLS (Azure Data Lake Storage) #Azure Databricks #Azure #Datasets #Azure Active Directory #Data Management #Data Lineage #Logical Data Model #API (Application Programming Interface) #Databricks #SQL Server #Oracle #Batch #Compliance #GDPR (General Data Protection Regulation) #ADF (Azure Data Factory) #Leadership #REST (Representational State Transfer) #SaaS (Software as a Service) #Security #Storage #DevOps #Web Services #Cybersecurity #Jenkins #Jira #Data Modeling #Java #Data Warehouse #Data Governance #ETL (Extract, Transform, Load) #Data Lake #Data Quality #Azure Log Analytics #REST API #Vulnerability Management #Azure ADLS (Azure Data Lake Storage)
Role description
Job Title: Senior Databricks Engineer
Jersey City, NJ; hybrid, 3 days per week on site
Contract-to-hire, 2-3 positions
W2 only; no third parties, please

Role Description
This role will be responsible for designing, developing, and deploying cloud data solutions for ISDAD, as part of the overall cyber data initiative to build out the security and risk data platform for Information Security. This individual will develop the data feeds and join the larger development effort of building out a Cybersecurity Data Lake. The goal of the data lake is to centralize the data and establish effective data governance around the data sources and their data lineage, relying heavily on Databricks. This role will collaborate with the developers, data owners, governance leads, and business analysts within the Information Technology (IT) department, as well as other stakeholders aligned with the applications.

Responsibilities
• Design, development, testing, and support of cloud data solutions focusing on data ingestion, data quality, data tuning, and performance of upstream cybersecurity data sources using Databricks (see the sketch after this list).
• Indirect leadership and mentorship of junior team members. Able to lead data feed development efforts and design initiatives. Participate in development meetings to align development priorities and objectives, assign tasks, and share experiences and challenges with applications under development.
• Consult with other technology and development teams as needed to coordinate the integration of applications with the larger company software ecosystem.
• Capture and document metadata for identified Key Data Elements (KDEs) to ensure accuracy and completeness for Data Quality (DQ) rules and processing of daily datasets.
• Work with the data architecture team to align KDEs to the logical data models, develop physical data structures, and document physical data names, definitions, and data types.
• Partner with the data owners and stakeholders to create technical requirements and DQ rules around the data elements needed in the data warehouse. Partner with the Business Management team and Data Owners to understand which critical metrics and data fields are needed for metric dashboards.
• Establish views to encapsulate the data so that it is fit for downstream consumption, and ensure the data aligns with the DQ rules established on that metadata so it is fit for daily use. Utilize the IBM TWS production job scheduling system and adhere to standards for the daily scheduling and batch monitoring of production jobs.
• Identify and resolve DQ issues, including inaccuracies and incomplete information. Enhance data quality efforts by implementing improved procedures and processes.
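The ingestion and data-quality work described above typically reduces to a read-validate-write loop on Databricks. Below is a minimal PySpark sketch of that pattern; the ADLS path, table names, column name, and DQ rule are hypothetical illustrations, not details from the posting.

```python
# Minimal read-validate-write sketch for a Databricks data feed (PySpark).
# The ADLS path, table names, and the event_id DQ rule are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Hypothetical upstream cybersecurity feed landed in ADLS Gen2 as Parquet.
raw = spark.read.parquet(
    "abfss://raw@<storage-account>.dfs.core.windows.net/phishing_events/"
)

# Example DQ rule on a Key Data Element: event_id must be populated.
valid = raw.filter(F.col("event_id").isNotNull())
rejected = raw.filter(F.col("event_id").isNull())

# Quarantine failures for review instead of silently dropping them.
rejected.write.format("delta").mode("append").saveAsTable(
    "cyber_lake.quarantine.phishing_events"
)

# Conforming rows go to a governed Delta table (Unity Catalog names assumed).
valid.write.format("delta").mode("append").saveAsTable(
    "cyber_lake.bronze.phishing_events"
)
```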
Qualifications and Skills
• Strong knowledge of Azure Databricks, Azure Data Factory, Azure Functions, Azure Data Lake Storage, Azure Event Grid, Azure Log Analytics, Azure Monitor, and Unity Catalog repository configuration.
• 10+ years’ experience in IT development, data governance, data architecture, or related roles, preferably in a highly regulated environment such as financial services.
• Strong knowledge of CI/CD and DevOps tooling (e.g., Git, Jenkins, Azure DevOps).
• Proficient in data management and data modeling tools (e.g., Collibra DQIM/DQE, IBM InfoSphere DA).
• Proficient in SQL Server, Oracle/PL-SQL, T-SQL, and SQL stored procedures.
• Proficient in Python, Java, or similar high-level server-side languages.
• Strong knowledge of enterprise Information Security data (e.g., phishing, identity management, privileged access, cloud security, incident response, vulnerability management, threat detection).
• Data knowledge of PaaS/SaaS products (e.g., ServiceNow, CrowdStrike, MS Purview, Proofpoint, WIZ.IO, Jira, SharePoint, Azure Active Directory, SAI360). Knowledge of Microsoft Sentinel for security information and event management (SIEM) is a plus.
• Understanding of information security frameworks (e.g., NIST, CIS, CRI Profile) and regulatory compliance (e.g., NYSDFS, GDPR, CCPA).
• Experience with REST API web services and microservice architecture, and a strong understanding of ETL/ELT (see the sketch after this list). Knowledge of IBM Tivoli Workload Scheduler is a plus.
• Exposure to Power BI for data visualization and reporting is a plus.
• Problem-solving and analytical skills, with a proactive, results-oriented approach.
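Many of the SaaS products listed above expose REST APIs, so a common ELT step is pulling JSON over HTTP and landing it in the lake. A minimal sketch, assuming the requests library; the endpoint, bearer token, and response shape are hypothetical, not from the posting.

```python
# Minimal REST-to-lake ELT sketch; endpoint, token, and payload shape assumed.
import requests
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

resp = requests.get(
    "https://api.example.com/v1/vulnerabilities",  # hypothetical SaaS feed
    headers={"Authorization": "Bearer <token>"},   # real feeds vary in auth
    timeout=30,
)
resp.raise_for_status()
records = resp.json()["results"]  # assumed payload shape; check vendor docs

# Schema inference from a list of dicts is fine for a sketch; production code
# would declare an explicit schema and handle pagination/incremental loads.
df = spark.createDataFrame(records)
df.write.format("delta").mode("append").saveAsTable(
    "cyber_lake.bronze.vulnerabilities"
)
```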