

Azure Databricks Developer (Local Consultant Required)
Featured Role | Apply directly with Data Freelance Hub
This role is for an Azure Databricks Developer in Louisville, KY, requiring 10+ years in data processing and 4+ years in Databricks and Python. Contract length is unspecified, with a focus on building scalable data pipelines for healthcare data.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: July 31, 2025
Project duration: Unknown
Location type: On-site
Contract type: Unknown
Security clearance: Unknown
Location detailed: Louisville, KY
Skills detailed: #Leadership #Mathematics #Azure #Python #Big Data #Azure Databricks #Spark (Apache Spark) #Scala #Cloudera #Informatica #Data Quality #Computer Science #NoSQL #Data Pipeline #Cloud #Databricks #Java #ETL (Extract, Transform, Load) #Data Engineering #Hadoop #Metadata #Agile #Data Management #Data Processing #Data Governance #Data Privacy #Data Lake
Role description
Location: Louisville, KY (Day 1 Onsite)
Job Description:
1. The Senior Data Engineer will be responsible for building the Enterprise Data Platform.
2. Set up scalable, robust, and resilient data pipelines that validate, ingest, normalize/enrich, and apply business-specific processing to healthcare data.
3. Build an Azure Data Lake leveraging Databricks to consolidate data across the company and serve it to various products and services.
4. The scope of this role includes working with engineering, product management, program management, and operations teams to deliver the pipeline platform, foundational and application-specific pipelines, and the Data Lake in collaboration with business and other teams.
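The validate → normalize/enrich flow described above can be sketched in plain Python. In practice this role would implement these stages on Databricks with Spark DataFrames; the record layout, field names, and rules below are illustrative assumptions, not details from the posting:

```python
# Minimal sketch of the validate -> normalize/enrich pipeline stages.
# Field names ("patient_id", "amount", "plan_id") and rules are
# hypothetical examples, not taken from the posting.

def validate(record):
    """Reject records missing required fields or with a null amount."""
    required = {"patient_id", "service_date", "amount"}
    return required <= record.keys() and record["amount"] is not None

def normalize(record):
    """Standardize field formats: amounts to float, IDs trimmed/uppercased."""
    out = dict(record)
    out["amount"] = float(out["amount"])
    out["patient_id"] = str(out["patient_id"]).strip().upper()
    return out

def enrich(record, plan_lookup):
    """Attach business-specific context from a reference table."""
    out = dict(record)
    out["plan_name"] = plan_lookup.get(out.get("plan_id"), "UNKNOWN")
    return out

def run_pipeline(records, plan_lookup):
    """Apply validate -> normalize -> enrich; invalid records are dropped."""
    return [enrich(normalize(r), plan_lookup) for r in records if validate(r)]

# Sample batch: second record fails validation (null amount) and is dropped.
raw = [
    {"patient_id": " p001 ", "service_date": "2025-07-01",
     "amount": "120.50", "plan_id": "A"},
    {"patient_id": "p002", "service_date": "2025-07-02", "amount": None},
]
plans = {"A": "Gold PPO"}
cleaned = run_pipeline(raw, plans)
```

On Databricks, each stage would typically be a DataFrame transformation (filters, `withColumn` casts, and a broadcast join for the reference lookup) rather than per-record Python functions.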
Required Skills
1. 10+ years of working experience in data processing / ETL / big data technologies such as Informatica, Hadoop, and Cloudera
2. 4+ years of working experience in Databricks (essential) and Python
3. Experience with Cloud / Azure architectural components
4. Experience building data pipelines and infrastructure
5. Deep understanding of data warehousing, reporting, and analytical concepts
6. Experience with the big data tech stack, including Hadoop, Java, Spark, Scala, Hive, and NoSQL data stores
7. Bachelor's degree in Mathematics, Physical Sciences, Engineering, or Computer Science
Responsibilities
1. Design, develop, operate, and drive a scalable and resilient data platform that addresses business requirements
2. Drive technology and business transformation through the creation of the Azure Data Lake
3. Ensure industry best practices around data pipelines, metadata management, data quality, data governance, and data privacy
4. Partner with Product Management and business leaders to drive Agile delivery of both existing and new offerings; assist with leadership and collaboration with engineering organizations within Change to manage and optimize the portfolio
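The data-quality responsibility above is often implemented as declarative checks that gate each pipeline batch (Databricks expresses these as Delta Live Tables expectations). A minimal plain-Python illustration, where the column names and the 5% null-rate threshold are assumptions for the example:

```python
# Illustrative data-quality gate: fail a batch when the null rate of a
# required column exceeds a threshold. Column names and the 0.05 limit
# are hypothetical, not taken from the posting.

def null_rate(rows, column):
    """Fraction of rows where `column` is missing or None."""
    if not rows:
        return 0.0
    missing = sum(1 for r in rows if r.get(column) is None)
    return missing / len(rows)

def quality_gate(rows, rules):
    """Check a batch against {column: max_null_rate} rules.

    Returns (passed, violations) where violations maps each failing
    column to its observed null rate.
    """
    violations = {
        col: rate
        for col, limit in rules.items()
        if (rate := null_rate(rows, col)) > limit
    }
    return (not violations, violations)

# Sample batch: "amount" is null in 1 of 2 rows (50% > 5% limit).
batch = [
    {"patient_id": "P1", "amount": 10.0},
    {"patient_id": "P2", "amount": None},
]
ok, problems = quality_gate(batch, {"patient_id": 0.05, "amount": 0.05})
```

A failing gate would typically quarantine the batch and emit metrics, rather than silently loading partial data into the lake.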
Nice-to-Have Skills
1. Experience leading the design and development of large systems
2. Demonstrated strong drive to learn and advocate for development best practices
3. Proven track record of building and delivering enterprise-class products
4. Full-stack experience (end-to-end development)
5. Crisp and effective executive communication skills, including significant experience presenting cross-functionally and across all levels