

Databricks Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Databricks Data Engineer based in Minnesota, offering a 12-month contract at a rate of $58-70/hour. Requires 3+ years in Azure and Databricks, and 2+ years in Python, PySpark, SQL, and related technologies.
Country: United States
Currency: $ USD
Day rate: $560
Date discovered: August 13, 2025
Project duration: More than 6 months
Location type: Remote
Contract type: Unknown
Security clearance: Unknown
Location detailed: Minnesota, United States
Skills detailed: #Airflow #Azure Databricks #Data Engineering #Data Pipeline #ETL (Extract, Transform, Load) #SQL (Structured Query Language) #Azure #Databricks #GitHub #Python #PySpark #Data Quality #Spark (Apache Spark) #Spark SQL
Role description
Position: Databricks Data Engineer
Location: Remote, must be based in Minnesota
Contract to Hire: 12 months, then conversion to full time
Rate Range: $58-70/hour, depending on experience level
Required Skills & Experience
• 3+ years of experience as a Data Engineer working in a heavy Azure environment
• 3+ years of experience building pipelines with Databricks
• 2+ years of experience with technologies in their tech stack (Python, PySpark, SQL, GitHub, Power BI, Airflow, Fabric)
Nice to Have Skills & Experience
• Healthcare experience
• Relevant certifications (Azure, Databricks, etc.)
Job Description
Insight Global is seeking a Sr. Azure Data Engineer with strong Databricks experience to join a large payor/provider on their provider product team. This person will be responsible for the following:
• Design, develop, and implement end-to-end data solutions using Azure Databricks.
• Convert current SQL to Python code in Databricks (a conversion sketch follows this list).
• Modify and maintain data pipelines.
• Write, test, and optimize PySpark and SQL scripts to transform and load high volumes of structured data.
• Update or maintain existing data pipelines in a production setting.
• Ensure data quality and integrity by implementing data validation and cleansing processes (see the validation sketch after this list).
• Demonstrate strong verbal communication and critical thinking skills, and work well within a team environment.
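To illustrate the SQL-to-Python conversion work this role describes, here is a minimal sketch of rewriting a SQL aggregation as PySpark DataFrame code in a Databricks notebook. The table and column names (member_claims, claim_status, claim_amount) are hypothetical examples, not taken from the posting.

```python
from pyspark.sql import SparkSession, functions as F

# In Databricks notebooks a `spark` session is provided; getOrCreate() also
# works when running the script elsewhere.
spark = SparkSession.builder.getOrCreate()

# Original SQL, as it might run via spark.sql():
#   SELECT member_id, SUM(claim_amount) AS total_claims
#   FROM member_claims
#   WHERE claim_status = 'APPROVED'
#   GROUP BY member_id

# Equivalent Python/DataFrame version:
claims = spark.table("member_claims")

approved_totals = (
    claims
    .where(F.col("claim_status") == "APPROVED")
    .groupBy("member_id")
    .agg(F.sum("claim_amount").alias("total_claims"))
)

# Persist the result as a managed table for downstream consumers.
approved_totals.write.mode("overwrite").saveAsTable("approved_claim_totals")
```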
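And for the data-quality responsibility, a minimal sketch of row-level validation and cleansing in PySpark; the rules, column names, and quarantine-table pattern are illustrative assumptions, not requirements from the posting.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
raw = spark.table("member_claims")  # hypothetical source table

# Cleanse: trim strings, normalize casing, drop exact duplicate rows.
cleansed = (
    raw
    .withColumn("member_id", F.trim(F.col("member_id")))
    .withColumn("claim_status", F.upper(F.trim(F.col("claim_status"))))
    .dropDuplicates()
)

# Validate: route rows that violate basic integrity rules to a quarantine
# table instead of silently dropping them.
is_valid = F.col("member_id").isNotNull() & (F.col("claim_amount") >= 0)

cleansed.filter(is_valid).write.mode("overwrite").saveAsTable("claims_clean")
cleansed.filter(~is_valid).write.mode("overwrite").saveAsTable("claims_quarantine")
```

Splitting valid and invalid rows this way keeps the pipeline auditable: bad records stay queryable in the quarantine table rather than disappearing during cleansing.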