hackajob

Senior Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer on an 18-month fixed-term contract, offering a competitive pay rate. Key skills include SQL, Databricks, Python, and ETL processes. Experience with data quality and leadership is essential. Remote work is available.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
April 11, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
Unknown
-
📄 - Contract
Fixed Term
-
🔒 - Security
Unknown
-
📍 - Location detailed
United Kingdom
-
🧠 - Skills detailed
#Databricks #Data Quality #Data Design #Data Engineering #Security #Scala #ETL (Extract, Transform, Load) #Monitoring #Observability #Python #SQL (Structured Query Language) #Leadership #Data Pipeline #Delta Lake
Role description
hackajob is collaborating with Evri to connect them with exceptional professionals for this role.

Senior Data Engineer – Design the Future of Our Data Platform (18-month Fixed Term Contract)

Ready to lead, influence, and build at scale? If you're an experienced Data Engineer who thrives on solving complex problems, shaping architecture, and raising engineering standards, this is a role where your impact will be felt across the entire business. As a Senior Data Engineer, you'll design and deliver high-quality, governed data products on our Databricks Lakehouse platform. Blending hands-on engineering with architectural thinking, you'll help define how data is built, optimised, and consumed, today and in the future.

What You'll Be Doing
You'll take ownership of complex data pipelines and platform components, ensuring solutions are scalable, maintainable, and aligned with enterprise governance. Working closely with architects, analysts, and business stakeholders, you'll translate requirements into robust data designs, while also mentoring junior engineers and contributing to shared standards, frameworks, and best practices. This role is central to how we evolve our data platform, from ingestion and modelling through to quality, observability, and cost-efficient performance.

Responsibilities
- Design and implement complex ETL/ELT pipelines using Databricks (Python, SQL, DLT).
- Build and optimise Delta Lake tables with effective partitioning and performance strategies.
- Contribute to Lakehouse architecture design, ingestion patterns, and data product boundaries.
- Ensure solutions align with governance, security, and lineage standards (Unity Catalog).
- Implement automated data quality testing, monitoring, and observability.
- Optimise cluster usage, job orchestration, and cost efficiency.
- Provide technical leadership and mentoring to other engineers.
- Drive continuous improvement through reusable components, frameworks, and innovation.

Interested? Here's What You'
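For candidates wondering what "automated data quality testing" looks like in practice: on Databricks this is typically done with Delta Live Tables expectations, but the underlying idea can be sketched in plain Python. All names below (check_nulls, QualityResult, the parcel_id column) are illustrative assumptions for this sketch, not part of Evri's actual framework:

```python
# Minimal sketch of an automated data-quality check: flag a column whose
# null rate exceeds a tolerance. In Databricks the equivalent would be a
# DLT expectation; this pure-Python version just shows the concept.
from dataclasses import dataclass


@dataclass
class QualityResult:
    check: str
    passed: bool
    detail: str


def check_nulls(rows, column, max_null_rate=0.0):
    """Fail if the share of None values in `column` exceeds max_null_rate."""
    if not rows:
        # No data to judge; treat as passing rather than failing the pipeline.
        return QualityResult("null_rate", True, "no rows")
    nulls = sum(1 for row in rows if row.get(column) is None)
    rate = nulls / len(rows)
    return QualityResult("null_rate", rate <= max_null_rate,
                         f"{column}: {rate:.1%} null")


# Example: one null out of three rows fails a zero-tolerance check.
rows = [{"parcel_id": 1}, {"parcel_id": None}, {"parcel_id": 3}]
result = check_nulls(rows, "parcel_id")
```

A check like this would run after each pipeline stage, with failures surfaced to monitoring rather than silently propagating bad rows downstream.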