Jobs via Dice

Databricks Engineer | Minneapolis, MN | Contract

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Databricks Engineer in Minneapolis, MN, on a long-term contract. Key skills include Azure Cloud, Databricks Workflows, Python, SQL, and ETL experience. Strong knowledge of Delta Lake architecture and data governance is required.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
January 10, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Minneapolis, MN
-
🧠 - Skills detailed
#Shell Scripting #Cloud #ACID (Atomicity, Consistency, Isolation, Durability) #ADF (Azure Data Factory) #Delta Lake #Databricks #ADLS (Azure Data Lake Storage) #Clustering #PySpark #Python #Data Security #Data Lake #Storage #Security #Azure Cloud #Data Pipeline #Microsoft Power BI #Terraform #Synapse #Oracle #Azure Data Factory #Azure #BI (Business Intelligence) #Data Integration #MySQL #Deployment #Snowflake #Spark (Apache Spark) #GitHub #Scripting #ETL (Extract, Transform, Load) #SQL (Structured Query Language) #Data Governance #Migration #Informatica #Jenkins #Scala #Azure Synapse Analytics
Role description
Dice is the leading career destination for tech experts at every stage of their careers. Our client, Healthcare Triangle Inc, is seeking the following. Apply via Dice today!

Hi, greetings from Healthcare Triangle! We have an opening with our client.

Role: Databricks Engineer
Location: Minneapolis, MN
Duration: Long-term contract

About the role: Azure Cloud, Databricks Workflows, DLT, notebooks, Python, SQL, PySpark, ADLS Gen2, ADF, GitHub, PL/SQL, Oracle, and ETL experience; understanding of Delta Lake architecture, CDC patterns, and the Lakehouse.

Job description:
• Ability to design and orchestrate data pipelines using Databricks Workflows and DLT, with a strong understanding of the medallion architecture.
• Expertise in developing Databricks notebooks for scalable solutions using Python, SQL, and PySpark.
• Understanding of Delta Lake architecture, CDC patterns, and the Lakehouse.
• Strong understanding of key Delta table features such as ACID transactions, time travel, schema enforcement, and deep and shallow clones.
• Performance tuning using liquid clustering, partitioning, Z-ordering, and data skipping on Delta tables.
• Knowledge of data governance (Unity Catalog), data security (RBAC, fine-grained access control), and data sharing (Delta Sharing).
• Proficiency with Azure Data Lake Storage Gen2 (ADLS Gen2), Azure Data Factory (ADF), and Terraform for provisioning and managing Azure resources.
• Knowledge of Spark Streaming and Auto Loader in Databricks.
• Strong experience analyzing legacy Informatica ETL workflows, including mappings, transformations, and data flow logic, to support seamless migration to Databricks-based data pipelines.
• Hands-on experience implementing CI/CD pipelines with Jenkins to automate deployment of Databricks notebooks, jobs, and data workflows.
• Integrating GitHub with Databricks Repos to enable seamless code synchronization, change tracking, and automated deployment workflows.
• Knowledge of Snowflake, Oracle, MySQL, and shell scripting for diverse data integration.
• Knowledge of Power BI and Azure Synapse Analytics for data analytics dashboards and reports.
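For a sense of the hands-on work these requirements describe, below is a minimal, illustrative PySpark sketch of a bronze-to-silver step in a medallion-style Delta pipeline. It is not part of the client's posting; the storage path, column name, and table name are assumptions for illustration only.

```python
# Minimal sketch of a bronze-to-silver medallion step (illustrative only;
# the path, column, and table names below are hypothetical).
from pyspark.sql import SparkSession, functions as F

# Databricks notebooks already provide `spark`; getOrCreate() is a no-op there.
spark = SparkSession.builder.getOrCreate()

# Ingest raw JSON files landed in ADLS Gen2 (hypothetical container/path).
bronze_df = spark.read.json("abfss://raw@examplestorage.dfs.core.windows.net/events/")

# Light cleansing before promoting data to the silver layer.
silver_df = (
    bronze_df.dropDuplicates(["event_id"])
    .withColumn("ingested_at", F.current_timestamp())
)

# Persist as an ACID-compliant Delta table; schema enforcement applies on append.
silver_df.write.format("delta").mode("append").saveAsTable("silver.events")
```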