

CyberX Info System
Sr. Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Sr. Data Engineer requiring MDM experience; the contract duration and pay rate are unspecified. Based on-site in the NJ/NYC area, candidates must have strong Azure and Databricks expertise, with a focus on data pipeline development and optimization.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
December 30, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
New York, NY
-
🧠 - Skills detailed
#Version Control #SQL (Structured Query Language) #Data Pipeline #Scala #Databases #Data Security #Azure #PySpark #Triggers #Azure Blob Storage #Datasets #Azure cloud #ETL (Extract, Transform, Load) #Data Engineering #Spark (Apache Spark) #Delta Lake #BI (Business Intelligence) #Security #ADLS (Azure Data Lake Storage) #Deployment #Databricks #MDM (Master Data Management) #Storage #Compliance #GitLab #Cloud #Data Processing #Big Data
Role description
Sr. Data Engineering Lead/Architect/Engineer
NJ/NYC Only
Persistent/MIZUHO
Must have MDM (Master Data Management) experience.
Job Summary:
We are seeking a highly skilled Azure Data Engineer with strong expertise in Databricks to join our data team. The ideal candidate will design, implement, and optimize large-scale data pipelines, ensuring scalability, reliability, and performance. This role involves working closely with multiple teams and business stakeholders to deliver cutting-edge data solutions.
Key Responsibilities:
1. Data Pipeline Development:
• Build and maintain scalable ETL/ELT pipelines using Databricks.
• Leverage PySpark/Spark and SQL to transform and process large datasets.
• Integrate data from multiple sources, including Azure Blob Storage, ADLS, and other relational/non-relational systems (an illustrative PySpark sketch follows this responsibilities list).
2. Collaboration & Analysis:
• Work closely with multiple teams to prepare data for dashboards and BI tools.
• Collaborate with cross-functional teams to understand business requirements and deliver tailored data solutions.
3. Performance & Optimization:
• Optimize Databricks workloads for cost efficiency and performance.
• Monitor and troubleshoot data pipelines to ensure reliability and accuracy.
4. Governance & Security:
• Implement and manage data security, access controls, and governance standards using Unity Catalog (the sketch after this list includes an example grant).
• Ensure compliance with organizational and regulatory data policies.
5. Deployment:
• Leverage Databricks Asset Bundles for seamless deployment of Databricks jobs, notebooks and configurations across environments.
• Manage version control for Databricks artifacts and collaborate with the team to maintain development best practices.
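As an illustration of the pipeline and governance work described above, the sketch below shows a minimal batch PySpark job of the kind this role would own. It is a rough, hedged example only: the ADLS path, the main.sales catalog/schema, the column names, and the data-analysts group are hypothetical placeholders, not details of this engagement.

from pyspark.sql import SparkSession, functions as F

# On Databricks a SparkSession is already provided; getOrCreate() also works locally.
spark = SparkSession.builder.getOrCreate()

# Extract: read raw files from an ADLS container (placeholder path).
raw_path = "abfss://raw@examplelake.dfs.core.windows.net/orders/"
orders = spark.read.format("parquet").load(raw_path)

# Transform: basic filtering and a daily aggregate, purely as an example.
daily_totals = (
    orders
    .filter(F.col("order_status") == "COMPLETED")
    .groupBy(F.to_date("order_ts").alias("order_date"))
    .agg(F.sum("amount").alias("total_amount"))
)

# Load: write a managed Delta table registered in Unity Catalog (placeholder name).
daily_totals.write.format("delta").mode("overwrite").saveAsTable("main.sales.daily_totals")

# Governance: grant read access to a hypothetical analyst group through Unity Catalog.
spark.sql("GRANT SELECT ON TABLE main.sales.daily_totals TO `data-analysts`")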
Technical Skills:
• Strong expertise in Databricks (Delta Lake, Unity Catalog, Lakehouse architecture, table triggers, Delta Live Tables pipelines, Databricks Runtime, etc.); a minimal Delta Live Tables sketch follows this list.
• Proficiency in Azure Cloud Services.
• Solid understanding of Spark and PySpark for big data processing.
• Experience with relational databases.
• Knowledge of Databricks Asset Bundles and GitLab.
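Because Delta Live Tables pipelines are called out explicitly, the following is a minimal, hedged sketch of a declarative pipeline definition. It assumes the dlt module that is only available inside a Databricks Delta Live Tables pipeline (it will not run as a standalone script), and the source path, table names, and expectation rule are illustrative placeholders.

import dlt
from pyspark.sql import functions as F

# Bronze layer: ingest raw JSON files (placeholder path).
@dlt.table(comment="Raw orders loaded from cloud storage")
def orders_bronze():
    return spark.read.format("json").load("abfss://raw@examplelake.dfs.core.windows.net/orders/")

# Silver layer: enforce a simple data-quality expectation and add a derived column.
@dlt.table(comment="Validated orders")
@dlt.expect_or_drop("valid_amount", "amount > 0")
def orders_silver():
    return dlt.read("orders_bronze").withColumn("order_date", F.to_date("order_ts"))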
Preferred Experience:
• Familiarity with Databricks Runtimes and advanced configurations.
• Knowledge of streaming frameworks such as Spark Streaming.
• Experience developing real-time data solutions (an illustrative streaming sketch follows this list).
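For the real-time experience mentioned above, the sketch below shows one common shape of a Spark Structured Streaming job that lands events in a Delta table. The input path, schema, checkpoint location, and target table are hypothetical placeholders chosen for illustration.

from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("orders-stream").getOrCreate()

# Streaming file sources require an explicit schema.
schema = StructType([
    StructField("order_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("order_ts", TimestampType()),
])

# Read new JSON files as they arrive (placeholder path).
events = spark.readStream.schema(schema).json("/mnt/raw/orders/")

# Continuously append to a Delta table; the checkpoint directory tracks progress.
query = (
    events.writeStream
    .format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/orders/")
    .outputMode("append")
    .toTable("main.sales.orders_stream")
)
query.awaitTermination()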
Certifications:
• Azure Data Engineer Associate or Databricks Certified Data Engineer Associate certification (optional).






