

Lead Data Engineer (Databricks) - W2 Only
Featured Role | Apply direct with Data Freelance Hub
This role is for a Lead Data Engineer (Databricks) with a 12-month contract, offering a competitive pay rate. Key skills include Spark, Python, Azure Databricks, and ETL processes. Requires 15+ years of big data experience and leadership in engineering teams.
Country
United States
Currency
$ USD
Day rate
-
Date discovered
July 10, 2025
Project duration
Unknown
Location type
Unknown
Contract type
W2 Contractor
Security clearance
Unknown
Location detailed
United States
Skills detailed
#Azure Cosmos DB #Azure Data Factory #Data Engineering #Synapse #Azure ADLS (Azure Data Lake Storage) #ETL (Extract, Transform, Load) #Database Management #Leadership #Databases #Azure SQL Database #Data Processing #Azure Databricks #Scala #Data Architecture #Spark (Apache Spark) #Programming #Data Lake #Big Data #Cloud #Azure #Azure SQL #ADLS (Azure Data Lake Storage) #Azure Blob Storage #Python #Data Pipeline #PySpark #SQL (Structured Query Language) #Data Management #Databricks #Storage #Data Modeling #Data Integration #ADF (Azure Data Factory)
Role description
Job Description:
Senior Data Engineering Leader
We are seeking a highly experienced Senior Data Engineering Leader with 15+ years of big data experience to lead onshore and offshore engineering teams on a major transformation program at a manufacturing leader. This role is 60-70% hands-on development and 30-40% team leadership and stakeholder management.
The candidate must have extensive experience and expert-level skills in the following areas:
β’ Programming Skills: Must have strong hands-on Spark, Python, PySpark, and SQL expertise.
β’ Big Data and Analytics: Knowledge of big data technologies like Azure Databricks and Synapse.
β’ Cloud Data Engineering concepts: Must demonstrate knowledge of Medallion Architecture and common ETL patterns, including ingestion frameworks.
β’ Performance tuning techniques and best practices: Understanding of performance analysis and system architecture is essential.
• Cloud data platform: Preferably Microsoft Fabric, Azure Synapse, or Azure Databricks; experience with another cloud data platform is also acceptable.
β’ Data modeling skills: Strong skills and knowledge of dimensional modeling, semantic modeling, and standard data modeling patterns used in analytical systems.
β’ Data Management and Storage: Proficiency with Azure SQL Database, Azure Data Lake Storage, Azure Cosmos DB, Azure Blob Storage, etc.
β’ Data Integration and ETL: Extensive experience with Azure Data Factory for data integration and ETL processes.
β’ Analytical Skills: Strong analytical and problem-solving skills.
β’ Problem-Solving & Technical Leadership Skills: Ability to identify, design, and implement improvements that drive optimal performance.
β’ Leadership & Collaboration: Experience leading onshore and offshore teams, fostering collaboration, and driving high-performance engineering culture.
β’ Stakeholder Management: Strong analytical and communication skills, with experience working closely with business and technical stakeholders to align on requirements.
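The Medallion Architecture mentioned above organizes data into bronze (raw), silver (cleansed), and gold (business-ready) layers. As a hypothetical illustration only, here is a minimal pure-Python sketch of that layering; a real Databricks pipeline would use PySpark DataFrames and Delta tables, and all function and field names below are invented for the example:

```python
# Hypothetical sketch of the medallion (bronze/silver/gold) pattern
# using plain Python dicts. Real Databricks pipelines would express
# each layer as PySpark transformations over Delta tables.

def bronze_ingest(raw_rows):
    """Bronze: land raw records as-is, tagging each with its layer."""
    return [dict(row, _layer="bronze") for row in raw_rows]

def silver_clean(bronze_rows):
    """Silver: drop malformed records and normalize fields/types."""
    cleaned = []
    for row in bronze_rows:
        if row.get("amount") is None:
            continue  # discard records missing a required field
        cleaned.append({"region": row["region"].strip().upper(),
                        "amount": float(row["amount"])})
    return cleaned

def gold_aggregate(silver_rows):
    """Gold: aggregate to a business-ready metric (revenue by region)."""
    totals = {}
    for row in silver_rows:
        totals[row["region"]] = totals.get(row["region"], 0.0) + row["amount"]
    return totals

raw = [{"region": " east ", "amount": "100"},
       {"region": "west",   "amount": None},
       {"region": "EAST",   "amount": "50"}]
print(gold_aggregate(silver_clean(bronze_ingest(raw))))  # {'EAST': 150.0}
```

The point of the pattern is that each layer is reproducible from the one below it, so cleansing or aggregation logic can change without re-ingesting source data.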
Responsibilities:
• Lead onshore and offshore data engineering teams, provide expert guidance, and collaborate with business stakeholders.
• Design and Build Data Pipelines: Develop and manage modern data pipelines and data streams using PySpark and Azure Data Factory.
β’ Database Management: Develop and maintain databases, data systems, and processing systems.
β’ Data Transformation: Transform complex raw data into actionable business insights using PySpark.
β’ Technical Guidance: Collaborate with stakeholders and teams to assist with data-related technical issues.
β’ Data Architecture: Ensure data architecture supports business requirements and scalability.
β’ Big Data Solutions: Utilize Databricks or Synapse for big data processing and analytics.
β’ Process Improvements: Identify, design, and implement process improvements, such as automating manual processes and optimizing data delivery.
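Several of the responsibilities above (data transformation, dimensional modeling, turning raw data into insights) come down to joining fact records to dimension tables and aggregating measures. As a hedged, hypothetical sketch in plain Python, with invented table and column names standing in for what would be a PySpark DataFrame join in a real Databricks pipeline:

```python
# Hypothetical sketch: joining fact rows to a dimension table,
# the core operation of dimensional modeling. A real pipeline
# would express this as a PySpark DataFrame join and groupBy.

dim_product = {  # dimension table: surrogate key -> attributes
    1: {"name": "widget", "category": "hardware"},
    2: {"name": "gizmo",  "category": "electronics"},
}

fact_sales = [  # fact table: foreign key + additive measure
    {"product_key": 1, "qty": 3},
    {"product_key": 2, "qty": 5},
    {"product_key": 1, "qty": 2},
]

def sales_by_category(facts, dim):
    """Join each fact to its dimension row, then sum qty by category."""
    totals = {}
    for fact in facts:
        category = dim[fact["product_key"]]["category"]
        totals[category] = totals.get(category, 0) + fact["qty"]
    return totals

print(sales_by_category(fact_sales, dim_product))
# {'hardware': 5, 'electronics': 5}
```

Keeping descriptive attributes in the dimension table and measures in the fact table is what lets the same facts be re-aggregated along any dimension attribute without reshaping the data.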