

Saransh Inc
Azure Databricks Engineer with Python & Spark - 2 Positions (Senior & Lead) - Only W2
Featured Role | Apply directly with Data Freelance Hub
This role is for an "Azure Databricks Engineer with Python & Spark" (Senior/Lead) on a W2 contract for 6 months, located in Weehawken, NJ. Key skills include Apache Spark, Python, Azure Data Factory, and ETL/ELT solutions.
Country: United States
Currency: $ USD
Day rate: Unknown
Date: November 19, 2025
Duration: Unknown
Location: On-site
Contract: W2 Contractor
Security: Unknown
Location detailed: Weehawken, NJ
Skills detailed: #Apache Spark #Data Management #Data Warehouse #Azure Cosmos DB #Scala #Data Catalog #Data Processing #Data Lineage #Data Lake #Apache Kafka #Azure SQL Data Warehouse #Data Storage #Kafka (Apache Kafka) #Databases #Data Governance #NoSQL #Azure ADLS (Azure Data Lake Storage) #ADF (Azure Data Factory) #Cloud #Azure Databricks #DevOps #Databricks #ETL (Extract, Transform, Load) #Storage #SQL (Structured Query Language) #API (Application Programming Interface) #Data Modeling #Data Ingestion #Azure SQL #ADLS (Azure Data Lake Storage) #Azure Data Factory #Azure Synapse Analytics #Spark (Apache Spark) #Data Pipeline #Infrastructure as Code (IaC) #Synapse #AWS (Amazon Web Services) #Big Data #Datasets #Azure #Python #Metadata
Role description
Role: Azure Databricks Engineer with Python & Spark (Senior / Lead)
Location: Weehawken, NJ (Onsite from Day 1)
Job Type: W2 Contract
Note: 2 positions are available - Senior & Lead
Experience Level: Senior OR Lead
Must Have Skills (Mandatory)
• Apache Spark
• Python
• Azure Data Factory
• Microsoft Azure SQL
• Azure Synapse Analytics
• ETL/ELT Solutions
Nice To Have Skills
• Apache Kafka
• CI/CD
Experience & Qualifications
• Expert-level understanding of Azure Data Factory, Azure Synapse, Azure SQL, Azure Data Lake, and Azure App Service
• Working knowledge of Python
• Expert-level skills writing and optimizing complex SQL
• Experience with complex data modeling, ETL design, and using large databases in a business environment
• Experience building data pipelines and applications to stream and process datasets at low latency
• Fluency with Big Data technologies such as Spark, Kafka, and Hive
• Experience designing and building data pipelines using API and streaming ingestion methods
• Knowledge of DevOps processes (including CI/CD) and Infrastructure as Code
• Experience developing NoSQL solutions using Azure Cosmos DB
• Thorough understanding of Azure and AWS cloud infrastructure offerings
Key Responsibilities
• Design and implement scalable, secure data processing pipelines using Azure Data Factory, Azure Databricks, and other Azure services
• Manage and optimize data storage using Azure Data Lake Storage, Azure SQL Data Warehouse, and Azure Cosmos DB
• Monitor and troubleshoot data-related issues within the Azure environment to maintain high availability and performance
• Automate data pipelines and workflows to streamline data ingestion, processing, and distribution
• Use Azure analytics services, such as Azure Synapse Analytics, to provide insights and support data-driven decision-making
• Provide guidance and support for data governance, including metadata management, data lineage, and data cataloging
Note: Visa-independent candidates are highly preferred
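For candidates gauging fit, the ETL/ELT work described above follows the standard extract-transform-load pattern. The sketch below is a minimal, hypothetical Python illustration of that pattern only; a real pipeline for this role would read from sources such as ADLS via Azure Data Factory or Databricks rather than in-memory lists, and all names and sample records here are invented.

```python
# Minimal illustration of the ETL (extract, transform, load) pattern.
# Hypothetical names and data; a production pipeline would use Spark
# DataFrames and Azure storage sinks instead of plain Python lists.

def extract(raw_rows):
    """Yield parsed records from raw CSV-like lines (the 'E' step)."""
    for line in raw_rows:
        name, amount = line.split(",")
        yield {"name": name.strip(), "amount": float(amount)}

def transform(records, min_amount=0.0):
    """Filter and normalize records (the 'T' step)."""
    return [
        {"name": r["name"].upper(), "amount": round(r["amount"], 2)}
        for r in records
        if r["amount"] >= min_amount
    ]

def load(records, sink):
    """Append transformed records to a sink (the 'L' step)."""
    sink.extend(records)
    return len(records)

raw = ["alice, 10.5", "bob, -3.0", "carol, 7.25"]
warehouse = []
loaded = load(transform(extract(raw)), warehouse)  # bob is filtered out
```

In Spark terms, `extract` corresponds to a source read, `transform` to DataFrame filters and column expressions, and `load` to a write into a table or storage layer.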






