hackajob

Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer on an 18-month fixed-term contract, offering competitive pay. Key skills include SQL, Databricks, Python, and ETL/ELT pipeline development. Experience with Delta Lake and data governance is essential. Remote work location available.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
πŸ—“οΈ - Date
April 17, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
Unknown
-
📄 - Contract
Fixed Term
-
🔒 - Security
Unknown
-
πŸ“ - Location detailed
Morley, England, United Kingdom
-
🧠 - Skills detailed
#Data Governance #Cloud #Scala #Documentation #Batch #Databricks #Data Quality #Delta Lake #ETL (Extract, Transform, Load) #Python #Spark (Apache Spark) #SQL (Structured Query Language) #Data Pipeline #Monitoring #Metadata #Data Engineering
Role description
hackajob is collaborating with Evri to connect them with exceptional professionals for this role.

Data Engineer – Build Scalable Data Products That Power the Business
18-month Fixed Term Contract

Want to turn raw data into trusted, analytics-ready insight? If you enjoy building robust pipelines, working with modern cloud data platforms, and delivering data products that genuinely enable better decisions, this is your chance to make a real impact.

As a Data Engineer, you’ll play a key role in designing, building, and maintaining scalable data pipelines on our Databricks Lakehouse platform. You’ll work closely with analysts, architects, and stakeholders to transform raw data into governed, high-quality assets that power reporting, analytics, and insight across the organisation.

What You’ll Be Doing
You’ll be responsible for building and maintaining reliable ETL/ELT pipelines, applying strong engineering standards and data governance principles throughout. From ingestion to transformation and optimisation, you’ll help shape how data flows through bronze, silver, and gold layers, ensuring it’s performant, maintainable, and trusted. You’ll also contribute to shared standards, reusable components, and continuous improvement across the data engineering function.

Responsibilities
• Build and maintain scalable ETL/ELT pipelines using Databricks (SQL, Python, Spark).
• Develop and optimise Delta Lake tables across medallion layers.
• Implement best practices for performance, modularity, and reusability.
• Support CI/CD processes for notebooks, jobs, and data workflows.
• Ingest and process batch and streaming data from a range of source systems.
• Apply data quality checks, validation rules, and monitoring.
• Maintain documentation, lineage, and metadata to support governance and auditability.
• Collaborate with analysts and stakeholders to translate requirements into data solutions.

Interested? Here’s What You’ll Need to Be Successful
Hands-on experience as a Data
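To give a flavour of the "data quality checks and validation rules" this role involves: here is a minimal, hedged sketch in plain Python of a row-level check applied between a raw (bronze) layer and a validated (silver) layer. It is illustrative only, not Evri's actual implementation; on Databricks this logic would typically be expressed as Delta Lake constraints or PySpark filters, and the field names (`parcel_id`, `scanned_at`) are hypothetical.

```python
# Illustrative bronze-to-silver quality gate: rows missing required
# fields are quarantined rather than silently dropped, so they can be
# monitored and reprocessed later. Field names are hypothetical.

def validate_rows(rows, required_fields=("parcel_id", "scanned_at")):
    """Split raw rows into silver-ready records and quarantined rejects."""
    valid, quarantined = [], []
    for row in rows:
        if all(row.get(f) not in (None, "") for f in required_fields):
            valid.append(row)
        else:
            quarantined.append(row)
    return valid, quarantined

raw = [
    {"parcel_id": "A1", "scanned_at": "2026-04-17T09:00:00Z"},
    {"parcel_id": "", "scanned_at": "2026-04-17T09:01:00Z"},  # fails check
]
silver, rejects = validate_rows(raw)  # 1 valid row, 1 quarantined row
```

Keeping rejects in a quarantine set, rather than discarding them, is what makes the monitoring and auditability responsibilities above practical.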