

LEAD AZURE DATA ENGINEER - W2 ONLY
Featured Role | Apply direct with Data Freelance Hub
This role is for a Lead Azure Data Engineer based in Denver, CO, hybrid, with a contract duration of 6+ months. Required skills include Databricks, Spark, and Python. Extensive data engineering experience and cloud platform expertise, preferably Azure, are essential.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: August 7, 2025
Project duration: More than 6 months
Location type: Hybrid
Contract type: W2 Contractor
Security clearance: Unknown
Location detailed: United States
Skills detailed: #Data Lakehouse #Spark (Apache Spark) #Cloud #Data Science #Strategy #Databricks #Data Quality #Monitoring #Data Lake #Python #Data Architecture #Data Pipeline #Data Engineering #Azure #ETL (Extract, Transform, Load) #Programming #Datasets #Scala #ADF (Azure Data Factory)
Role description
Position: Lead/Senior Data Engineer
Location: Denver, CO - Hybrid
Duration: 6+ months with possible extension
Interviews: 4 rounds, all conducted within one week.
Main skills: Databricks, Spark, Python
Key deliverables for the person coming into this role:
1. Enhance, performance-tune, and maintain Python-based Spark notebooks for complex Calc Engine logic.
2. Build, manage, and enhance ETL code in Databricks and ADF pipelines.
3. Design data pipelines from various source systems into the Data Lakehouse.
Position Summary:
We are seeking a Data Engineer to join our dynamic team. The ideal candidate will have a robust background in data engineering, with a proven track record of designing and implementing efficient data pipelines and systems. This role will play a significant part in transforming and optimizing our data architecture to support our data-driven strategy.
Key Responsibilities:
• Develop and maintain scalable data pipelines and systems using Python, Spark, and Databricks to process and manage large-scale datasets.
• Design and implement unified data models with enterprise-wide consistency in key naming conventions, data types, and transformation functions.
• Optimize operations in Spark for handling large, unstructured, and nested datasets by implementing efficient data structuring and processing techniques.
• Collaborate with data science and analytics teams to create and refine data processes that enhance decision-making and align with business goals.
• Lead initiatives to streamline and enhance data quality monitoring systems.
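As an illustrative sketch only (not part of the posting), the key-naming normalization and nested-dataset flattening described above might look like the following in plain Python. In a Databricks environment this logic would typically run over Spark DataFrames instead; all names and sample data here are hypothetical:

```python
# Hypothetical sketch: flatten nested records and normalize key names
# so downstream tables share consistent snake_case columns.
import re


def to_snake_case(name: str) -> str:
    """Normalize a key like 'OrderID' or 'Zip-Code' to snake_case."""
    name = re.sub(r"[\s\-]+", "_", name)           # spaces/hyphens -> underscore
    name = re.sub(r"(?<=[a-z0-9])(?=[A-Z])", "_", name)  # split camelCase
    return name.lower()


def flatten(record: dict, prefix: str = "") -> dict:
    """Recursively flatten nested dicts into 'parent_child' keys."""
    flat = {}
    for key, value in record.items():
        norm = to_snake_case(key)
        full = f"{prefix}_{norm}" if prefix else norm
        if isinstance(value, dict):
            flat.update(flatten(value, full))
        else:
            flat[full] = value
    return flat


raw = {"OrderID": 42, "Customer": {"FirstName": "Ada", "Zip-Code": "80202"}}
print(flatten(raw))
# {'order_id': 42, 'customer_first_name': 'Ada', 'customer_zip_code': '80202'}
```

The same normalization function can be reused across pipelines, which is one way to enforce the enterprise-wide naming consistency the role calls for.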
Qualifications:
• Extensive experience as a Data Engineer, with a focus on Python programming in Spark/Databricks environments.
• Proven expertise in designing and optimizing large-scale data pipelines, ETL processes, and data lakes.
• Strong background in cloud platforms, ideally Azure, with hands-on experience managing cloud-based data infrastructure.
• Demonstrated ability to convert legacy systems into modern Spark/Databricks platforms, improving efficiency and performance.
• Excellent problem-solving skills with the ability to troubleshoot and resolve complex data issues.
• Strong communication and collaboration skills, with the ability to work effectively in a remote team environment.