

Azure Databricks Engineer (10+ Years) | W2 | Remote (Within NJ/NY)
Featured Role | Apply direct with Data Freelance Hub
This role is for an Azure Databricks Engineer (10+ Years) on a W2 contract, remote within NJ/NY. Key skills include Databricks, Azure DevOps, ETL pipeline development, and strong Python/PySpark expertise. Agile experience is required.
Country
United States
Currency
$ USD
Day rate
-
Date discovered
July 30, 2025
Project duration
Unknown
Location type
Remote
Contract type
W2 Contractor
Security clearance
Unknown
Location detailed
New York, United States
Skills detailed
#Azure Databricks #ERWin #Agile #Cloud #ADF (Azure Data Factory) #Azure DevOps #Azure #Azure cloud #ETL (Extract, Transform, Load) #Deployment #PySpark #ML (Machine Learning) #Data Engineering #Data Modeling #Python #Scala #Spark (Apache Spark) #DevOps #CLI (Command-Line Interface) #Databricks
Role description
Job Description:
Title: Data/ML Engineer
Location: Remote (within NY/NJ)
Required:
• Strong knowledge of the Databricks platform (clusters, jobs, workspace permissions, notebooks)
• Azure DevOps CI/CD experience
• Ability to work in Agile development using the Azure DevOps tool
• Hands-on experience with the Databricks CLI (a Python sketch of equivalent operations follows this list)
• Experience as an Azure Cloud Engineer
• Databricks Data Engineer (ML Engineer skills for the ML engineer profile)
• Develop and maintain ETL pipelines using ADF and Databricks (a minimal PySpark sketch appears after this list)
• Apply data modeling techniques and dimensional modeling using tools like Erwin, ensuring scalability and performance
• Familiarity with DAB (Databricks Asset Bundles) structure and the deployment lifecycle
• Familiarity with Unity Catalog, workspace-level governance, and optimization
• Ability to support and troubleshoot non-data-engineering workloads (e.g., admin tasks, job failures, permission issues)
• Strong Python and PySpark knowledge, including the ability to build library/framework PyPI packages through CI/CD and use them in a notebook (see the package sketch after this list)
• Strong communication skills
• Strategic and problem-solving skills
• Experience with JFrog (optional, good to have)
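
The Databricks CLI item above is about driving the workspace programmatically (jobs, clusters, workspace objects). Since the code here is in Python, below is a minimal sketch using the official databricks-sdk package, which wraps the same REST APIs the CLI calls; it assumes credentials are already configured via environment variables or ~/.databrickscfg, and is an illustration rather than part of the job description.

```python
# Minimal sketch with the Databricks SDK for Python (databricks-sdk).
# Assumes authentication is configured via env vars or ~/.databrickscfg.
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()

# List jobs, roughly what `databricks jobs list` shows
for job in w.jobs.list():
    print(job.job_id, job.settings.name)

# List clusters and their state, roughly `databricks clusters list`
for cluster in w.clusters.list():
    print(cluster.cluster_id, cluster.cluster_name, cluster.state)
```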
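To make the ADF + Databricks ETL bullet concrete, here is a minimal PySpark sketch of the Databricks half of such a pipeline: reading a file that an ADF copy activity might have landed, applying a simple transform, and writing a Delta table. All paths, column names, and table names are hypothetical.

```python
# Minimal ETL sketch in PySpark. Paths, columns, and table names are
# placeholders, not details taken from the job description.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Extract: raw CSV landed in the lake (e.g., by an ADF copy activity)
raw = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("/mnt/landing/orders/")
)

# Transform: de-duplicate and derive a column
clean = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_date", F.to_date("order_date"))
       .withColumn("total", F.col("quantity") * F.col("unit_price"))
)

# Load: persist as a Delta table for downstream consumers
clean.write.format("delta").mode("overwrite").saveAsTable("silver.orders")
```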
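The PyPI-package bullet asks for libraries built in CI/CD and consumed in notebooks. A minimal sketch of the consumption side is below; the package name, helper function, and Azure Artifacts feed URL are hypothetical placeholders for artifacts an Azure DevOps pipeline would publish.

```python
# Inside a Databricks notebook: install a privately published package and use it.
# "my-etl-utils", "standardize_columns", and the feed URL are hypothetical.
%pip install my-etl-utils --index-url https://pkgs.dev.azure.com/<org>/_packaging/<feed>/pypi/simple/

from my_etl_utils import standardize_columns  # hypothetical helper

# `spark` and `display` are predefined in Databricks notebooks
df = spark.table("silver.orders")
df = standardize_columns(df)  # e.g., normalize column names to snake_case
display(df)
```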