
Data Engineering (Azure Databricks)
Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineering (Azure Databricks) position in Pittsburgh, PA, with a focus on building scalable data pipelines. Required skills include Azure Databricks, Spark, ETL/ELT workflows, and data architecture. On-site work is mandatory.
Country
United States
Currency
Unknown
Day rate
Unknown
Date discovered
August 2, 2025
Project duration
Unknown
Location type
On-site
Contract type
Unknown
Security clearance
Unknown
Location detailed
Louisville, KY
Skills detailed
#BI (Business Intelligence) #Data Lakehouse #SQL (Structured Query Language) #PySpark #Azure #Data Lake #Delta Lake #Storage #Data Architecture #Data Pipeline #Scala #Data Governance #ETL (Extract, Transform, Load) #Cloud #ADF (Azure Data Factory) #Data Engineering #Azure Databricks #Security #Data Science #Datasets #ML (Machine Learning) #Data Quality #Synapse #Spark (Apache Spark) #Databricks #Azure Blob Storage #Compliance #Azure Data Factory #Azure SQL
Role description
Job Description
We are seeking an experienced Azure Databricks Developer to join our data engineering team in Pittsburgh, PA. The ideal candidate will have strong expertise in building scalable data pipelines and transforming large datasets using Azure Databricks, Spark, and other Azure data services. You will work closely with data architects, analysts, and business stakeholders to design and implement high-performance data solutions that drive business insights.
Responsibilities:
Design, develop, and optimize data pipelines and ETL/ELT workflows using Azure Databricks, Spark, and Azure Data Factory.
Integrate data from multiple sources, including on-premises and cloud systems (e.g., Azure Blob Storage, Azure SQL, Synapse).
Implement and maintain scalable data lakehouse architectures leveraging Delta Lake.
Collaborate with data scientists and BI teams to prepare clean and usable datasets for analysis and machine learning models.
Monitor and optimize pipeline performance, ensuring reliability, data quality, and scalability.
Write efficient PySpark or Scala code within the Databricks environment.
Ensure compliance with data governance, security, and privacy policies.