

Azure Data Lead
Featured Role | Apply directly with Data Freelance Hub
This role is for an Azure Data Lead, remote, with a contract length of "unknown" and a pay rate of "unknown." Key skills include ADF, PySpark, GitHub Actions, and DevOps. Experience with Azure Synapse and pipeline monitoring is essential.
Country
United States
Currency
$ USD
Day rate
-
Date discovered
June 11, 2025
Project duration
Unknown
Location type
Remote
Contract type
Unknown
Security clearance
Unknown
Location detailed
United States
Skills detailed
#Dataflow #GitHub #Triggers #ADF (Azure Data Factory) #Python #Spark (Apache Spark) #Monitoring #Synapse #Azure #SQL (Structured Query Language) #DevOps #Deployment #SQL Queries #PySpark
Role description
Azure Data Lead
Remote
We are looking for someone with a deep understanding of ADF, PySpark, and native Python, as well as infrastructure knowledge such as GitHub Actions integration, DevOps practices, and production support of ADF pipelines, along with the JD below:
Detailed Job Responsibilities:
• Hands-on development with ADF.
• Build and create pipelines and dataflows in ADF.
• Understand business requirements and implement them in Azure Synapse notebooks using Spark SQL.
• Deploy Azure components (dataflows, pipelines, triggers, and notebooks) daily using GitHub Actions.
• Build Azure Python Function Apps and Spark SQL queries by working from the business case or existing mappings.
• Apply optimization techniques when running Synapse notebooks, making best use of Spark pools.
• Monitor existing pipelines running in ADF and Synapse and take appropriate action on failure.
• Build and amend Power BI dashboards per business requests.
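As a rough illustration of the GitHub Actions deployment responsibility above, a workflow for publishing ADF artifacts might look something like the sketch below. Every name here (the `AZURE_CREDENTIALS` secret, the `my-rg` resource group, the `my-adf` factory, the pipeline name and JSON path) is a placeholder assumption, not a detail from this posting, and a real setup would likely differ.

```yaml
# Hypothetical GitHub Actions workflow for deploying an ADF pipeline.
# All resource names, paths, and secrets below are illustrative placeholders.
name: deploy-adf
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # Authenticate using a service-principal credential stored as a repo secret.
      - uses: azure/login@v2
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}

      # Publish a pipeline definition exported in the repo.
      # Assumes the Azure CLI 'datafactory' extension is installed on the runner.
      - run: |
          az datafactory pipeline create \
            --resource-group my-rg \
            --factory-name my-adf \
            --name ingest-pipeline \
            --pipeline @pipelines/ingest-pipeline.json
```

The same pattern extends to triggers and notebooks by adding further CLI steps, which is how daily artifact deployment could be automated from a branch merge.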