Azure Databricks Developer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for an Azure Databricks Developer in Louisville, KY, on a contract basis. Requires 10+ years in data processing, 4+ years in Databricks and Python, and experience with Azure architecture and data pipelines.
🌎 - Country
United States
πŸ’± - Currency
$ USD
-
πŸ’° - Day rate
-
πŸ—“οΈ - Date discovered
August 9, 2025
πŸ•’ - Project duration
Unknown
-
🏝️ - Location type
On-site
-
πŸ“„ - Contract type
Unknown
-
πŸ”’ - Security clearance
Unknown
-
πŸ“ - Location detailed
Kentucky, United States
-
🧠 - Skills detailed
#"ETL (Extract, Transform, Load)" #Cloudera #Hadoop #NoSQL #Python #Informatica #"Data Privacy" #Cloud #"Data Pipeline" #Leadership #"Data Governance" #"Data Quality" #"Big Data" #"Data Lake" #Java #"Data Management" #"Data Engineering" #Metadata #Scala #"Spark (Apache Spark)" #"Azure Databricks" #Databricks #Azure #Agile #"Data Processing"
Role description
Role: Azure Databricks Developer
Location: Louisville, KY (Day 1 Onsite)
Contract
Job Description:
1. The Senior Data Engineer will be responsible for building the Enterprise Data platform.
2. Set up scalable, robust, and resilient data pipelines to validate, ingest, normalize/enrich, and apply business-specific processing to healthcare data.
3. Build an Azure Data Lake leveraging Databricks technology to consolidate all data across the company and serve that data to various products and services.
4. The scope of this role includes working with engineering, product management, program management, and operations teams to deliver the pipeline platform, foundational and application-specific pipelines, and the Data Lake in collaboration with business and other teams.
Responsibilities
1. Design, develop, operate, and drive a scalable and resilient data platform that addresses business requirements
2. Drive technology and business transformation through the creation of the Azure Data Lake
3. Ensure industry best practices around data pipelines, metadata management, data quality, data governance, and data privacy
4. Partner with Product Management and business leaders to drive Agile delivery of both existing and new offerings; assist with leadership and collaboration with engineering organizations within Change to manage and optimize the portfolio
Required Skills
1. 10+ years of working experience in data processing / ETL / Big Data technologies such as Informatica, Hadoop, and Cloudera
2. 4+ years of working experience in Databricks (essential) and Python
3. Experience with Cloud / Azure architectural components
4. Experience building data pipelines and infrastructure
5. Deep understanding of data warehousing, reporting, and analytical concepts
6. Experience with the Big Data tech stack, including Hadoop, Java, Spark, Scala, Hive, and NoSQL data stores
Nice to Have Skills
1. Experience leading the design and development of large systems
2. Demonstrated strong drive to learn and to advocate for development best practices
3. Proven track record of building and delivering enterprise-class products
4. Full-stack experience (end-to-end development)
5. Crisp and effective executive communication skills, including significant experience presenting cross-functionally and across all levels