F2F Interview || Senior Azure Data Engineer || Weehawken NJ (ONSITE) || Contract

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Azure Data Engineer in Weehawken, NJ, on a long-term contract. Requires 8+ years of experience in investment banking, strong data modeling, SQL, and expertise in Azure technologies, including ADLS, Azure Synapse, and Databricks.
🌎 - Country
United States
πŸ’± - Currency
$ USD
-
πŸ’° - Day rate
-
πŸ—“οΈ - Date discovered
August 30, 2025
πŸ•’ - Project duration
Unknown
-
🏝️ - Location type
On-site
-
πŸ“„ - Contract type
Unknown
-
πŸ”’ - Security clearance
Unknown
-
πŸ“ - Location detailed
Weehawken, NJ
-
🧠 - Skills detailed
#Programming #Python #Oracle #SQL (Structured Query Language) #ETL (Extract Transform Load) #Data Modeling #Azure #Cloud #Azure Data Factory #Azure Databricks #SQL Server #Big Data #ADLS (Azure Data Lake Storage) #Databricks #Datasets #ADF (Azure Data Factory) #Scala #Synapse #Compliance #Data Engineering #Spark (Apache Spark) #Azure cloud #Azure Synapse Analytics #PySpark #Data Processing
Role description
Dice is the leading career destination for tech experts at every stage of their careers. Our client, iPeople Infosystems LLC, is seeking the following. Apply via Dice today!

Role: Senior Azure Data Engineer with Strong Data Modeling and SQL (Investment Banking experience)
Location: Weehawken, NJ (ONSITE; F2F interview is a MUST)
Contract: Long Term
Experience: 8+ years

Job Description: Senior Azure Data Engineer, Investment Banking
• Design and implement scalable data solutions using ADLS, Azure Synapse Analytics, and Azure Databricks to support high-volume financial data processing and analytics.
• Excellent hands-on programming skills in big data analytics across Azure cloud and on-prem technologies: Spark, Python, Azure Synapse, Azure Data Factory, and Azure Databricks.
• Develop and optimize data models for structured and semi-structured financial datasets, ensuring performance, accuracy, and compliance with investment banking standards.
• Build robust ETL pipelines using Azure Data Factory and PySpark to automate ingestion, transformation, and validation of data from diverse sources, including Oracle, SQL Server, and cloud-native platforms.
• Engineer RESTful APIs and Python-based frameworks to expose curated datasets for downstream consumption by analytics and reporting teams.
• Solid understanding of data modeling, data warehousing principles, and lakehouse architecture.
• Advanced working knowledge of SQL, with experience in DWH/ETL implementation and a firm grasp of ETL/ELT design patterns.