

Brooksource
Azure Databricks Developer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for an Azure Databricks Developer, offering a contract of unspecified length at a pay rate of $960 per day. Key skills include Azure Databricks, Python/PySpark, C#, Apache Kafka, and API development. Data engineering experience is essential; Master Data Management (MDM) exposure is preferred.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
960
-
🗓️ - Date
April 17, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Louisville, KY
-
🧠 - Skills detailed
#Apache Kafka #Datasets #Agile #Computer Science #C# #Data Engineering #API (Application Programming Interface) #Cloud #Scala #Apache Spark #Data Management #Automation #SaaS (Software as a Service) #Spark (Apache Spark) #ML (Machine Learning) #Java #Databricks #Azure Databricks #Databases #.Net #PySpark #Scrum #Data Architecture #MDM (Master Data Management) #Python #Spatial Data #AI (Artificial Intelligence) #Azure #Kafka (Apache Kafka) #Debugging #Quality Assurance #Data Ingestion
Role description
Azure Databricks Developer
Brooksource
Fortune 50 Health Insurance Client
Overview
We are seeking a highly skilled Azure Databricks Developer to design, build, and optimize scalable data solutions within a cloud-based SaaS environment. In this role, you will be a key contributor to an enterprise-wide Master Data Management (MDM) platform that serves as the system of record across multiple business domains.
You will play a critical role in developing high-performance data ingestion pipelines, integrating diverse source systems, and enabling seamless data flow (ingress and egress) across a large-scale ecosystem. The platform supports millions of records—including customer and insurance profile data—and is central to organizational decision-making.
This is an excellent opportunity for a hands-on data engineer who thrives in a fast-paced, collaborative environment and enjoys working on high-impact, enterprise-scale data platforms.
Key Responsibilities
• Design and implement high-performance data ingestion pipelines using Azure Databricks and Apache Spark (see the illustrative sketch after this list)
• Build scalable, reusable frameworks for ingesting and processing large and complex datasets, including geospatial data
• Integrate multiple source systems into a centralized SaaS-based MDM platform
• Enable robust data ingress and egress capabilities across enterprise applications
• Develop and maintain APIs and system integrations for cross-platform data exchange
• Support and enhance Master Data Management (MDM) processes to ensure a consistent “golden record” across systems
• Deliver and present proofs of concept (POCs) to stakeholders for new technologies and solutions
• Implement and uphold quality assurance standards, including testing, debugging, and performance optimization
• Troubleshoot complex data and software issues, identifying root causes and implementing solutions
• Collaborate with cross-functional teams including product, engineering, and business stakeholders
• Apply automation and AI-driven techniques to improve efficiency and scalability
• Participate in Agile ceremonies (daily scrums, sprint planning, retrospectives)
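As a rough illustration of the kind of work these responsibilities describe, the sketch below reads a landed batch with PySpark, applies light standardization, and merges it into a consolidated ("golden record") Delta table. The paths, table names, and key columns are assumptions for illustration only, not details of the client's platform.

# Hypothetical sketch of an ingestion-plus-MDM merge step on Azure Databricks.
# All names (paths, tables, keys) are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F
from delta.tables import DeltaTable

spark = SparkSession.builder.getOrCreate()  # provided automatically in a Databricks notebook

# 1. Ingest a batch of raw profile records landed by an upstream source system.
raw = (
    spark.read.format("json")
    .load("/mnt/landing/customer_profiles/")          # assumed landing path
    .withColumn("ingested_at", F.current_timestamp())
)

# 2. Light standardization so records from different sources can be matched.
cleaned = (
    raw.withColumn("email", F.lower(F.trim(F.col("email"))))
       .dropDuplicates(["source_system", "source_record_id"])
)

# 3. Upsert into the consolidated golden-record Delta table keyed on a master id.
golden = DeltaTable.forName(spark, "mdm.customer_golden")   # assumed target table
(
    golden.alias("g")
    .merge(cleaned.alias("s"), "g.master_id = s.master_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)

In practice, a pipeline like this would typically be scheduled as a Databricks job and extended with schema validation and error handling; the sketch only shows the core ingest-standardize-merge flow named in the responsibilities above.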
Required Qualifications
• Bachelor’s degree in Computer Science, Engineering, or a related field (or equivalent experience)
• 3–5 years of experience in software or data engineering
• Hands-on experience with:
   • Azure Databricks
   • Python and PySpark
   • C#
   • API development and system integration
   • Apache Kafka
• Strong understanding of distributed data systems and modern data architectures
• Experience working in Agile/Scrum environments
• Solid knowledge of databases and web technologies
• Strong analytical, problem-solving, and troubleshooting skills
• Excellent communication skills and ability to work cross-functionally
Preferred Qualifications
• Experience with large-scale, enterprise data platforms
• Exposure to Master Data Management (MDM) concepts and tools
• Experience processing geospatial data
• Familiarity with AI/ML applications in data engineering workflows
• Proficiency in additional languages and frameworks such as Java or .NET