

Saransh Inc
Azure Databricks Engineer with Python & Spark - 2 Positions (Senior & Lead) - Only W2
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for an "Azure Databricks Engineer with Python & Spark" (Senior/Lead) on a W2 contract for 6 months, located in Weehawken, NJ. Key skills include Apache Spark, Python, Azure Data Factory, and ETL/ELT solutions.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
November 19, 2025
🕒 - Duration
6 months
-
🏝️ - Location
On-site
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Weehawken, NJ
-
🧠 - Skills detailed
#Apache Spark #Data Management #Data Warehouse #Azure Cosmos DB #Scala #Data Catalog #Data Processing #Data Lineage #Data Lake #Apache Kafka #Azure SQL Data Warehouse #Data Storage #Kafka (Apache Kafka) #Databases #Data Governance #NoSQL #Azure ADLS (Azure Data Lake Storage) #ADF (Azure Data Factory) #Cloud #Azure Databricks #DevOps #Databricks #ETL (Extract, Transform, Load) #Storage #SQL (Structured Query Language) #API (Application Programming Interface) #Data Modeling #Data Ingestion #Azure SQL #ADLS (Azure Data Lake Storage) #Azure Data Factory #Azure Synapse Analytics #Spark (Apache Spark) #Data Pipeline #Infrastructure as Code (IaC) #Synapse #AWS (Amazon Web Services) #Big Data #Datasets #Azure #Python #Metadata
Role description
Role: Azure Databricks Engineer with Python & Spark (Senior / Lead)
Location: Weehawken, NJ (Onsite from Day 1)
Job Type: W2 Contract
Note: 2 positions are available - Senior & Lead
Experience Level: Senior OR Lead
Must Have Skills (Mandatory)
• Apache Spark
• Python
• Azure Data Factory
• Microsoft Azure SQL
• Azure Synapse Analytics
• ETL/ELT Solutions
Nice To Have Skills
• Apache Kafka
• CI/CD
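As an illustration of the Python + ETL/ELT skill set named above, here is a toy extract-transform-load pass in plain Python. This is a sketch only: a production pipeline for this role would use PySpark DataFrames on Azure Databricks orchestrated by Azure Data Factory, and the record layout here is hypothetical.

```python
# Illustrative only: a minimal extract -> transform -> load pass.
# The in-memory "source" and record fields are hypothetical stand-ins
# for a real data-lake source and sink.

def extract(rows):
    """Extract: yield raw source records (in-memory stand-in for a source)."""
    yield from rows

def transform(records):
    """Transform: normalize names and drop records missing an id."""
    for r in records:
        if r.get("id") is None:
            continue
        yield {"id": r["id"], "name": r["name"].strip().title()}

def load(records):
    """Load: collect into a target list (stand-in for a data-lake sink)."""
    return list(records)

raw = [{"id": 1, "name": "  ada lovelace"}, {"id": None, "name": "x"}]
result = load(transform(extract(raw)))
# result == [{"id": 1, "name": "Ada Lovelace"}]
```

The same extract/transform/load shape carries over directly to PySpark, where each stage becomes a DataFrame read, a set of column transformations, and a write to storage.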
Experience & Qualifications
• Expert-level understanding of Azure Data Factory, Azure Synapse, Azure SQL, Azure Data Lake, and Azure App Service
• Working knowledge of Python
• Expert-level skills in writing and optimizing complex SQL
• Experience with complex data modeling, ETL design, and working with large databases in a business environment
• Experience building data pipelines and applications to stream and process datasets at low latency
• Fluency with Big Data technologies such as Spark, Kafka, and Hive
• Experience designing and building data pipelines using API-based and streaming ingestion methods
• Knowledge of DevOps processes (including CI/CD) and Infrastructure as Code
• Experience developing NoSQL solutions using Azure Cosmos DB
• Thorough understanding of Azure and AWS cloud infrastructure offerings
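The streaming-ingestion experience listed above can be sketched, without a broker, as micro-batching over an event iterator — the same shape a Kafka consumer or Spark Structured Streaming job processes. The event fields and batch size here are hypothetical stand-ins.

```python
# Illustrative micro-batch consumer: groups a stream of events into small
# batches for low-latency processing. A real pipeline would read from
# Kafka / Event Hubs; the offset/value event shape is hypothetical.
from itertools import islice

def micro_batches(events, batch_size=3):
    """Yield lists of up to batch_size events from an iterator."""
    it = iter(events)
    while True:
        batch = list(islice(it, batch_size))
        if not batch:
            return
        yield batch

stream = ({"offset": i, "value": i * i} for i in range(7))
batches = list(micro_batches(stream))
# 7 events with batch_size=3 -> batches of 3, 3, and 1 events
```

Batching like this is the core trade-off in streaming work: smaller batches lower latency, larger batches raise throughput.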
Key Responsibilities
• Design and implement scalable and secure data processing pipelines using Azure Data Factory, Azure Databricks, and other Azure services
• Manage and optimize data storage using Azure Data Lake Storage, Azure SQL Data Warehouse, and Azure Cosmos DB
• Monitor and troubleshoot data-related issues within the Azure environment to maintain high availability and performance
• Automate data pipelines and workflows to streamline data ingestion, processing, and distribution tasks
• Utilize Azure's analytics services, such as Azure Synapse Analytics, to provide insights and support data-driven decision-making
• Provide guidance and support for data governance, including metadata management, data lineage, and data cataloging
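The governance responsibilities above (metadata management, lineage, cataloging) often come down to attaching structured metadata to each dataset a pipeline produces. A minimal lineage record might look like the sketch below; the field names are hypothetical, and real catalogs (e.g. Microsoft Purview or Databricks Unity Catalog) define their own schemas.

```python
# Illustrative data-lineage record: captures which job produced a dataset
# and which upstream datasets fed it. Field and dataset names are
# hypothetical examples, not a real catalog schema.
from dataclasses import dataclass, field, asdict

@dataclass
class LineageRecord:
    dataset: str                                  # dataset this record describes
    produced_by: str                              # pipeline or job name
    sources: list = field(default_factory=list)   # upstream dataset names

rec = LineageRecord(
    dataset="curated.sales_daily",
    produced_by="adf_pipeline_sales",
    sources=["raw.sales_events", "ref.stores"],
)
record = asdict(rec)  # serializable form, ready to push to a catalog API
```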
Note: Visa-independent candidates are highly preferred






