

Azure Databricks Developer (Python, PySpark, SQL)
Featured Role | Apply direct with Data Freelance Hub
This role is for an Azure Databricks Developer (Python, PySpark, SQL) on a long-term contract in Pittsburgh, PA, requiring 8-10+ years of IT experience, 5+ years with Azure Databricks, and strong skills in ETL, data processing, and cloud environments.
Country: United States
Currency: Unknown
Day rate: -
Date discovered: September 3, 2025
Project duration: Unknown
Location type: On-site
Contract type: Unknown
Security clearance: Unknown
Location detailed: Pennsylvania
Skills detailed: #Data Lake #Spark (Apache Spark) #Data Processing #Azure Databricks #ETL (Extract, Transform, Load) #PySpark #Spark SQL #Azure cloud #Scala #Azure #Compliance #Debugging #Delta Lake #Datasets #SQL (Structured Query Language) #Data Engineering #Data Pipeline #Security #ADF (Azure Data Factory) #DevOps #Data Quality #Databricks #Azure Data Factory #Cloud #MLflow #Python
Role description
Job Type: Contract
Job Category: IT
Job Description
Job Title: Azure Databricks Developer (Python, PySpark, SQL)
Duration: Long-Term Contract
Job Description: We are seeking an experienced Azure Databricks Developer with strong expertise in Python, PySpark, and SQL to join our team in Pittsburgh, PA. The ideal candidate will have hands-on experience building and optimizing data pipelines, working with large-scale datasets, and deploying solutions in Azure cloud environments. This role requires onsite presence five days a week.
Responsibilities:
Design, develop, and optimize data pipelines and ETL workflows using Azure Databricks, PySpark, and SQL.
Work with stakeholders to understand business requirements and translate them into scalable data solutions.
Implement data processing, transformation, and cleansing for large-scale structured and unstructured data.
Collaborate with data engineers, analysts, and architects to build reliable and high-performing data platforms.
Ensure data quality, security, and compliance across all solutions.
Monitor, debug, and optimize performance of Databricks clusters and jobs.
Required Skills:
Overall IT Experience: 8-10+ Years
5+ years of hands-on experience with Azure Databricks.
Strong expertise in Python, PySpark, and SQL.
Solid experience with ETL pipeline design and data processing in cloud environments.
Good understanding of Azure Data Lake, Azure Data Factory, and related Azure services.
Experience working with large, complex datasets and performance tuning.
Strong problem-solving and debugging skills.
Nice to Have:
Experience with Delta Lake, MLflow, or Spark Streaming.
Familiarity with CI/CD pipelines and DevOps practices in Azure.
Exposure to financial services, healthcare, or large-scale enterprise environments.
Required Skills: Cloud Developer, SQL, Application Developer