

Azure Data Architect
Featured Role | Apply direct with Data Freelance Hub
This role is for an Azure Data Architect with 15+ years of experience, focusing on Capital Markets. It offers a hybrid contract based in Iselin, NJ, and New York, NY; the pay rate is not disclosed. Key skills include Python, SQL, Snowflake, and cloud data architecture. Cloud certification is required.
Country
United States
Currency
$ USD
Day rate
Unknown
Date discovered
July 26, 2025
Project duration
Unknown
Location type
Hybrid
Contract type
Unknown
Security clearance
Unknown
Location detailed
New York, NY
Skills detailed
#Data Engineering #Data Pipeline #Datasets #Programming #Data Modeling #Code Reviews #Data Quality #Data Bricks #Delta Lake #Big Data #EDW (Enterprise Data Warehouse) #SQL Queries #Business Analysis #ETL (Extract, Transform, Load) #SQL (Structured Query Language) #Data Architecture #Data Management #Migration #Security #Cloud #Python #Spark (Apache Spark) #Synapse #Scala #Data Warehouse #PySpark #Azure #Databricks #Data Integration #Redshift #SQL Server #Spark SQL #Airflow #Snowflake #ADF (Azure Data Factory) #Data Ingestion
Role description
Role: Azure Data Architect
Location: Iselin, NJ and New York, NY (3 days hybrid onsite)
Experience: 15+ years
Strong Capital Markets domain experience is required.
Summary:
The Consultant, Data & Analytics will play a key role in helping clients unlock the value of their data. The role involves working with clients to understand their data and analytics needs and providing recommendations and solutions for data-driven decision making. The Consultant will work closely with other members of the technology and analytics teams to ensure that projects are delivered on time and within budget. This position is for a cloud data engineer/architect with a background in Python, PySpark, SQL, and data warehousing for enterprise-level systems. The position calls for someone who is comfortable working with business users and who brings business-analysis expertise.
Skills Required:
• Snowflake
• SQL
• Python
• ADF (Azure Data Factory)
• Azure Databricks
• PySpark
• Data Warehouse Concepts
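For context, here is a minimal sketch, not taken from the client's codebase, of how the stack above commonly fits together: ADF lands raw files in ADLS, a Databricks PySpark job cleans them, and the curated result is written to Snowflake. All paths, table names, and connection options below are hypothetical placeholders, and the Snowflake Spark connector is assumed to be installed on the cluster.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("trades-ingest").getOrCreate()

# Read raw trade files that an ADF copy activity dropped into ADLS (hypothetical path).
raw = spark.read.option("header", "true").csv("abfss://raw@example.dfs.core.windows.net/trades/")

# Basic cleanup: typed columns, de-duplication, and a load timestamp for lineage.
clean = (
    raw.withColumn("trade_date", F.to_date("trade_date"))
       .withColumn("notional", F.col("notional").cast("decimal(18,2)"))
       .dropDuplicates(["trade_id"])
       .withColumn("load_ts", F.current_timestamp())
)

# Write to Snowflake via the Spark connector; all connection options are placeholders.
sf_options = {
    "sfURL": "example.snowflakecomputing.com",
    "sfUser": "ETL_USER",
    "sfPassword": "***",
    "sfDatabase": "MARKETS",
    "sfSchema": "STAGING",
    "sfWarehouse": "ETL_WH",
}
(clean.write.format("snowflake")
      .options(**sf_options)
      .option("dbtable", "STG_TRADES")
      .mode("overwrite")
      .save())
```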
Skills:
• 9+ years of Python coding experience.
• 5+ years of SQL Server-based development of large datasets.
• 5+ years of experience developing and deploying ETL pipelines using Databricks and PySpark.
• Experience with a cloud data warehouse such as Synapse, BigQuery, Redshift, or Snowflake.
• Experience in data warehousing: OLTP, OLAP, dimensions, facts, and data modeling.
• Previous experience leading an enterprise-wide cloud data platform migration, with strong architectural and design skills.
• Experience with cloud-based data architectures, messaging, and analytics.
• Cloud certification(s) required.
• Any experience with Airflow is a plus (a minimal example follows this list).
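Since Airflow is called out as a plus, the sketch below shows a minimal, hypothetical Airflow 2.x DAG with the extract/transform/load shape typical of such pipelines. Task bodies, IDs, and the schedule are placeholders, not the client's actual pipeline.

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # Placeholder: pull source files into a staging area.
    print("extracting")

def transform():
    # Placeholder: trigger the PySpark/Databricks transformation job.
    print("transforming")

def load():
    # Placeholder: load curated data into the warehouse.
    print("loading")

with DAG(
    dag_id="nightly_trades_etl",   # hypothetical name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",             # Airflow 2.4+ parameter
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3
```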
Major Responsibilities:
• Build and optimize data pipelines for efficient data ingestion, transformation, and loading from various sources while ensuring data quality and integrity (a sketch of this pattern follows the list).
• Design, develop, and deploy Spark programs in the Databricks environment to process and analyze large volumes of data.
• Experience with Delta Lake, DWH, data integration, cloud, design, and data modeling.
• Proficient in developing programs in Python and SQL.
• Experience with data warehouse dimensional data modeling.
• Work with event-based/streaming technologies to ingest and process data.
• Work with structured, semi-structured, and unstructured data.
• Optimize Databricks jobs for performance and scalability to handle big data workloads.
• Monitor and troubleshoot Databricks jobs; identify and resolve issues and bottlenecks.
• Implement best practices for data management, security, and governance within the Databricks environment.
• Experience designing and developing Enterprise Data Warehouse solutions.
• Proficient in writing SQL queries and programming, including stored procedures and reverse engineering existing processes.
• Perform code reviews to ensure fit to requirements, optimal execution patterns, and adherence to established standards.
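Several of the responsibilities above (pipelines with quality checks, Delta Lake, job optimization) follow a common Databricks pattern. The sketch below illustrates it under assumed paths and column names; it is not the client's implementation.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("delta-curation").getOrCreate()

# Raw ("bronze") positions landed by the ingestion pipeline (hypothetical path).
bronze = spark.read.format("delta").load("/mnt/lake/bronze/positions")

# Quality gate: drop rows missing business keys; track rejects for monitoring.
valid = bronze.filter(F.col("position_id").isNotNull() & F.col("as_of_date").isNotNull())
print(f"rejected {bronze.count() - valid.count()} rows failing key checks")

# Curated ("silver") layer, partitioned by date for query pruning.
(valid.write.format("delta")
      .mode("overwrite")
      .partitionBy("as_of_date")
      .save("/mnt/lake/silver/positions"))

# Compact small files; OPTIMIZE is Databricks Delta SQL (assumed available).
spark.sql("OPTIMIZE delta.`/mnt/lake/silver/positions`")
```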