

Azure Databricks Developer
Featured Role | Apply directly with Data Freelance Hub
This role is for an Azure Databricks Developer in Pittsburgh, PA, on a contract basis. Requires 10+ years in data engineering, 5+ years with Azure Databricks, and proficiency in PySpark, SQL, and Azure Data Factory.
Country: United States
Currency: Unknown
Day rate: -
Date discovered: August 2, 2025
Project duration: Unknown
Location type: On-site
Contract type: Unknown
Security clearance: Unknown
Location detailed: Pennsylvania
Skills detailed: #Distributed Computing #GIT #SQL (Structured Query Language) #PySpark #Azure #Data Lake #Delta Lake #Data Ingestion #ADLS (Azure Data Lake Storage) #Storage #Computer Science #Data Pipeline #Apache Spark #Big Data #DevOps #Scala #ETL (Extract, Transform, Load) #Cloud #ADF (Azure Data Factory) #Data Engineering #Azure ADLS (Azure Data Lake Storage) #Azure Databricks #Data Analysis #Security #Spark SQL #Data Quality #Data Security #Deployment #Spark (Apache Spark) #Databricks #Azure Data Factory
Role description
Job Type: Contract
Job Category: IT
Job Description
Job Title: Azure Databricks Developer
Location: Pittsburgh, PA (Local to PA candidates only)
On-site & Contract
We are seeking an experienced Azure Databricks Developer with a strong background in cloud-based data engineering and analytics solutions. The ideal candidate will have hands-on experience in building scalable data pipelines, transforming data, and integrating various services in the Azure ecosystem with a focus on Databricks, Spark, and Azure Data Lake.
Key Responsibilities:
Design and develop scalable data pipelines using Azure Databricks and Apache Spark
Perform data ingestion from multiple sources into Azure Data Lake / Delta Lake
Collaborate with data analysts, architects, and business stakeholders to translate business requirements into technical solutions
Implement CI/CD pipelines and ensure efficient deployment of Databricks notebooks and related components
Work with Azure Data Factory (ADF) for orchestration and integration
Ensure data quality, security, and governance best practices are followed
Monitor and optimize performance of big data workloads
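The data-quality responsibility above can be illustrated with a small pre-ingestion check of the kind such a pipeline might run. This is a plain-Python sketch for illustration only; the field names and rules are hypothetical, not taken from the posting, and in a real Databricks pipeline the same checks would typically be expressed as PySpark column expressions or Delta Lake table constraints.

```python
# Hypothetical pre-ingestion quality gate. The schema and rules below are
# illustrative, not from the job posting; a Databricks pipeline would usually
# express these as PySpark filters or Delta Lake CHECK/NOT NULL constraints.

REQUIRED_FIELDS = ("id", "event_ts", "amount")

def validate_record(record: dict) -> list:
    """Return a list of rule violations for one record (empty = clean)."""
    errors = []
    for field in REQUIRED_FIELDS:
        if record.get(field) in (None, ""):
            errors.append(f"missing required field: {field}")
    amount = record.get("amount")
    if isinstance(amount, (int, float)) and amount < 0:
        errors.append("amount must be non-negative")
    return errors

def split_batch(records):
    """Partition a batch into (clean, quarantined) - a common quarantine pattern."""
    clean, quarantined = [], []
    for rec in records:
        errs = validate_record(rec)
        if errs:
            quarantined.append((rec, errs))
        else:
            clean.append(rec)
    return clean, quarantined

if __name__ == "__main__":
    batch = [
        {"id": 1, "event_ts": "2025-08-02T10:00:00Z", "amount": 12.5},
        {"id": 2, "event_ts": "", "amount": -3.0},
    ]
    good, bad = split_batch(batch)
    print(len(good), len(bad))  # 1 1
```

Quarantining failing records rather than dropping them keeps the bad rows available for later inspection, which is the usual practice when governance requirements apply.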
Required Skills & Qualifications:
10+ years of experience in data engineering or software development
5+ years of hands-on experience with Azure Databricks and Apache Spark
Proficiency with PySpark, SQL, and Delta Lake
Experience with Azure Data Factory, Azure Data Lake Storage (Gen2)
Strong understanding of distributed computing and ETL workflows
Familiarity with DevOps practices, Git, and CI/CD pipelines
Solid understanding of data security and governance on the Azure platform
Bachelor's degree in Computer Science, Engineering, or a related field
Required Skills: SQL Application Developer