Yochana

Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer with a contract length of over 6 months, offering a competitive pay rate. Key skills include Azure Databricks, data governance, SQL, and Python. Experience in financial services and knowledge of information security frameworks are required.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
πŸ—“οΈ - Date
March 14, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
πŸ“ - Location detailed
United States
-
🧠 - Skills detailed
#Metadata #Cybersecurity #Azure Active Directory #API (Application Programming Interface) #Oracle #Azure ADLS (Azure Data Lake Storage) #Logical Data Model #GIT #Jira #Azure Databricks #Data Governance #Visualization #ADLS (Azure Data Lake Storage) #Databricks #Cloud #Data Modeling #ETL (Extract, Transform, Load) #SharePoint #Java #Monitoring #SQL (Structured Query Language) #Datasets #Compliance #Data Architecture #Batch #Computer Science #Data Quality #Jenkins #Azure DevOps #SaaS (Software as a Service) #Web Services #Azure Data Factory #Leadership #DevOps #Collibra #REST API #Azure #Data Management #GDPR (General Data Protection Regulation) #Vulnerability Management #SQL Server #ADF (Azure Data Factory) #Data Lake #Storage #Security #Data Engineering #Python #Azure Log Analytics #REST (Representational State Transfer) #Data Ingestion
Role description
Full-time. This role is aligned with the Continuous Controls Monitoring program. KCIs (Key Control Indicators) and Data Quality (DQ) rules will be created to continuously assess and report on the effectiveness of ISDAD's internal controls as part of the firm's GRC Risk Management and CDO Data Governance frameworks.

Role Objectives
• Design, development, testing, and support of cloud data solutions for the CyberDW catalogue, focusing on data ingestion, data quality, data tuning, and performance of upstream cybersecurity data sources.
• Provide indirect leadership and mentorship to junior team members. Able to lead data feed development efforts and design initiatives. Participate in development meetings to align development priorities and objectives, assign tasks, and share experiences and challenges with applications under development.
• Consult with other technology and development teams as needed to coordinate the integration of applications with the larger company software ecosystem.
• Capture and document metadata for identified Key Data Elements (KDEs) to ensure accuracy and completeness for Data Quality rules and processing of daily datasets.
• Work with the data architecture team to align KDEs to logical data models, develop physical data structures, and document physical data names, definitions, and data types.
• Partner with data owners and stakeholders to create technical requirements and DQ rules around the data elements needed in the CyberDW. Partner with the Business Management team and Data Owners to understand which critical metrics and data fields are needed for metric dashboards.
• Establish CyberDW views to encapsulate the data so that it is fit for downstream consumption. Ensure that the data aligns with the DQ rules established on that metadata so it is fit for daily use (see the sketch after the qualifications list). Utilize the IBM TWS production job scheduling system and adhere to standards for daily scheduling and batch monitoring of production jobs.
• Identify and resolve DQ issues, including inaccuracies and incomplete information. Enhance data quality efforts by implementing improved procedures and processes.

Qualifications And Skills
• Bachelor's degree in Computer Science, Information Security, Data Management, or a related field.
• 10+ years' experience in IT development, data governance, data architecture, or related roles, preferably in a highly regulated environment such as financial services.
• 5+ years' experience with Azure Databricks, Azure Data Factory, Azure Functions, Azure Data Lake Storage, Azure Event Grid, Azure Log Analytics, Azure Monitor, and Unity Catalog repository configuration.
• Proficient in data management and data modeling tools (e.g., Collibra DQIM/DQE, IBM InfoSphere DA).
• Strong knowledge of CI/CD and DevOps tooling (e.g., Git, Jenkins, Azure DevOps).
• Proficient in SQL Server, Oracle PL/SQL, T-SQL, and SQL stored procedures.
• Proficient in Python, Java, or similar high-level server-side languages.
• Strong knowledge of enterprise information security data (e.g., phishing, identity management, privileged access, cloud security, incident response, vulnerability management, threat detection).
• Data knowledge of PaaS/SaaS products (e.g., ServiceNow, CrowdStrike, MS Purview, Proofpoint, WIZ.IO, JIRA, SharePoint, Azure Active Directory, SAI360). Knowledge of Microsoft Sentinel for security information and event management (SIEM) is a plus.
• Understanding of information security frameworks and security controls (e.g., NIST, CIS, CRI Profile) and regulatory compliance (e.g., NYSDFS, GDPR, CCPA).
• Experience with REST API web services and microservice architecture. Strong understanding of ETL/ELT. Knowledge of IBM Tivoli Workload Scheduler is a plus.
• Exposure to Power BI for data visualization and reporting is a plus.
• Problem-solving and analytical skills, with an initiative-taking and results-oriented approach.
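For illustration only, and not part of the role description itself: a minimal PySpark sketch of how a DQ rule on a KDE might be expressed and a curated view published in an Azure Databricks job. The table and column names (cyberdw.vuln_findings, asset_id, first_seen) are assumptions for the sketch; the actual CyberDW catalogue, Collibra DQ rule definitions, and IBM TWS scheduling are not shown.

```python
# Illustrative sketch only: hypothetical DQ rule checks on a Key Data Element (KDE)
# in PySpark, as they might run inside an Azure Databricks job.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # already provided in a Databricks notebook/job

# Hypothetical upstream cybersecurity feed landed in the CyberDW catalogue.
source = spark.table("cyberdw.vuln_findings")

# DQ rule 1: the KDE "asset_id" must be populated on every record.
# DQ rule 2: "first_seen" must not lie in the future.
checks = source.agg(
    F.count(F.lit(1)).alias("row_count"),
    F.count(F.when(F.col("asset_id").isNull(), True)).alias("missing_asset_id"),
    F.count(F.when(F.col("first_seen") > F.current_timestamp(), True)).alias("future_first_seen"),
).collect()[0]

# Fail the batch (surfacing a KCI breach to the monitoring program) if a rule is violated.
if checks["missing_asset_id"] > 0 or checks["future_first_seen"] > 0:
    raise ValueError(f"DQ rule failure: {checks.asDict()}")

# Publish a curated view so downstream metric dashboards consume only conformant data.
source.filter(F.col("asset_id").isNotNull()).createOrReplaceTempView("cyberdw_vuln_findings_curated")
```

In practice, per the description above, such rules would be defined and governed through the Collibra DQ tooling and scheduled and monitored via IBM TWS rather than hand-coded per feed; the sketch only illustrates the shape of a rule check and a downstream-facing view.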