

Premier Group Recruitment
Lead Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Lead Data Engineer on a 5-month contract, remote, with a pay rate of "unknown." Requires 7+ years in data engineering, strong Azure Databricks expertise, and experience modernizing legacy systems, preferably in financial services.
Country
United States
Currency
$ USD
Day rate
Unknown
Date
April 30, 2026
Duration
3 to 6 months
Location
Remote
Contract
Unknown
Security
Unknown
Location detailed
United States
Skills detailed
#ADF (Azure Data Factory) #Python #Data Quality #Data Pipeline #Consulting #ADLS (Azure Data Lake Storage) #SSIS (SQL Server Integration Services) #Leadership #MS SSRS (Microsoft SQL Server Reporting Services) #Agile #Delta Lake #AI (Artificial Intelligence) #Code Reviews #Azure Databricks #Spark (Apache Spark) #Data Engineering #ETL (Extract, Transform, Load) #Security #Scala #PySpark #Azure #SQL (Structured Query Language) #Vault #Observability #SSRS (SQL Server Reporting Services) #Databricks #Terraform
Role description
Lead Data Engineer (Contract)
5-Month Engagement | Extension Possible
Location: Remote
A technology consulting firm is seeking a Lead Data Engineer to join a major data and platform modernization program for a U.S.-based financial lender that specializes in the agricultural industry.
This is a hands-on leadership role where you'll own the architecture and delivery of a modern Azure-based data platform while managing a small team of engineers.
The Opportunity
You'll lead the design and build of a production-grade Azure Databricks lakehouse, replacing legacy reporting and ETL systems and delivering scalable, governed data pipelines across multiple workstreams.
This is a player/coach role, ideal for someone who enjoys writing production code, setting engineering standards, and leading from the front.
What You'll Be Doing
• Own the end-to-end technical architecture across multiple data workstreams
• Build and scale an Azure Databricks lakehouse (Delta Lake, Unity Catalog, medallion architecture)
• Migrate legacy SSRS/SSIS logic into modern data pipelines
• Design ingestion pipelines for external platform data (Parquet-based)
• Implement secure data-sharing frameworks for downstream partners
• Establish best practices across CI/CD, testing, observability, and data quality
• Lead code reviews, mentor engineers, and set engineering standards
• Partner with stakeholders across security, infrastructure, and business teams
• Use AI coding tools (e.g., Copilot, Cursor) to accelerate development responsibly
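To make the medallion pattern named above concrete, here is a minimal sketch of a bronze-to-silver cleansing step with basic data-quality gates, in plain Python. The field names ("loan_id", "amount") and rules are hypothetical illustrations, not part of this role; in the actual platform this logic would typically be written in PySpark against Delta Lake tables.

```python
# Sketch of a bronze -> silver cleansing step in a medallion pipeline:
# quarantine records that fail basic data-quality checks, then
# deduplicate on a business key. Field names are hypothetical.

def clean_bronze(records, key="loan_id", required=("loan_id", "amount")):
    seen = set()
    silver, rejected = [], []
    for rec in records:
        # Quality gate: every required field must be present and non-null.
        if any(rec.get(f) is None for f in required):
            rejected.append(rec)
            continue
        # Deduplicate on the business key, keeping the first occurrence.
        if rec[key] in seen:
            continue
        seen.add(rec[key])
        silver.append(rec)
    return silver, rejected

bronze = [
    {"loan_id": "A1", "amount": 1000},
    {"loan_id": "A1", "amount": 1000},   # duplicate of A1
    {"loan_id": "A2", "amount": None},   # fails the quality gate
]
silver, rejected = clean_bronze(bronze)
```

In Databricks, the same gates are usually expressed as PySpark filters or Delta Live Tables expectations, with rejected rows routed to a quarantine table for review rather than silently dropped.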
What We're Looking For
• 7+ years in data engineering, including leadership experience
• Strong expertise in Azure Databricks (Spark, PySpark, Delta Lake, Unity Catalog)
• Experience modernizing legacy data systems (SSRS/SSIS)
• Deep knowledge of the Azure ecosystem (ADLS, ADF, Key Vault, networking)
• Advanced SQL and Python skills with strong software engineering fundamentals
• Experience leading teams in an agile, client-facing environment
• Familiarity with AI-assisted development tools and best practices
Nice to Have
• Background in financial services or lending platforms
• Experience with core systems (e.g., loan management systems or banking platforms)
• Exposure to data-sharing frameworks or partner data exchange
• Databricks certifications
• Experience with infrastructure-as-code (Terraform, Bicep)






