

Diagonal Matrix
Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer with strong Azure or AWS experience to design and maintain ETL pipelines and cloud analytics solutions. The contract runs for more than 6 months, pays £70,000.00-£96,188.59 per year, and requires expertise in data engineering, cloud data services, and data governance.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
💰 - Day rate
437
🗓️ - Date
March 18, 2026
🕒 - Duration
More than 6 months
🏝️ - Location
Remote
📄 - Contract
Fixed Term
🔒 - Security
Unknown
📍 - Location detailed
Remote
🧠 - Skills detailed
#Monitoring #Azure #JSON (JavaScript Object Notation) #Data Governance #Datasets #Azure Databricks #AWS (Amazon Web Services) #ADF (Azure Data Factory) #Data Framework #Data Quality #Redshift #XML (eXtensible Markup Language) #AWS Lambda #Scala #Amazon Redshift #Storage #Data Ingestion #Batch #Security #Cloud #Data Integration #Data Lifecycle #Azure Data Factory #Tableau #AWS Glue #CRM (Customer Relationship Management) #PySpark #SQL Queries #Data Lake #Snowflake #Spark SQL #Azure Event Hubs #Synapse #Python #ETL (Extract, Transform, Load) #AWS Kinesis #Databricks #Kafka (Apache Kafka) #Delta Lake #SQL (Structured Query Language) #BI (Business Intelligence) #Databases #Azure Synapse Analytics #Spark (Apache Spark) #Data Warehouse #Lambda (AWS Lambda) #ML (Machine Learning) #DevOps #Data Architecture #Data Engineering #SaaS (Software as a Service) #Metadata #Observability #Documentation #Deployment #Data Pipeline #Microsoft Power BI
Role description
Visa Sponsorship: Sponsorship is available for the right candidates. Intra-Company Transfer (ICT) visa applicants are considered.
Job Summary
We are looking for a highly skilled Data Engineer / ETL Engineer with strong experience on Azure or AWS (hands-on experience with at least one cloud platform is required) to design, build, optimise, and maintain modern data platforms, scalable ETL/ELT pipelines, and cloud-native analytics solutions. This role is ideal for someone with hands-on expertise in data engineering, data integration, cloud data services, warehousing, orchestration, performance tuning, and modern analytics architectures.
The ideal candidate will have experience working with large-scale structured and unstructured data, building robust batch and real-time data pipelines, and enabling downstream reporting, analytics, machine learning, and business intelligence use cases. You will work closely with data architects, analysts, BI developers, DevOps engineers, product teams, and business stakeholders to deliver secure, scalable, and high-performing data solutions.
This is an excellent opportunity for a data professional who enjoys working across the full data lifecycle, from ingestion and transformation to modelling, governance, observability, and cloud deployment.
Key Responsibilities
Design, develop, and support end-to-end ETL/ELT pipelines on Azure or AWS
Build scalable and reusable data ingestion frameworks for batch, micro-batch, and real-time workloads
Develop and optimise cloud-based data solutions using Azure Data Factory, Azure Databricks, AWS Glue, AWS Lambda, AWS Step Functions, and related services
Create and manage data pipelines for ingesting data from APIs, databases, files, ERP/CRM platforms, SaaS applications, and streaming sources
Build and maintain data lakes, lakehouses, and cloud data warehouses
Perform data transformation and processing using Python, PySpark, SQL, Spark SQL, and distributed data frameworks
Design and implement data models for reporting, analytics, and operational consumption
Work with Azure Synapse Analytics, Snowflake, Amazon Redshift, Databricks SQL, and SQL-based warehouse platforms
Develop metadata-driven and parameterised ETL frameworks to improve scalability and reusability
Implement data quality checks, validation rules, reconciliation logic, and pipeline monitoring
Optimise performance of ETL jobs, Spark workloads, SQL queries, partitioning strategies, and storage formats
Build and maintain pipelines for incremental loading, CDC, SCD, deduplication, schema evolution, and historical tracking
Support real-time and event-driven architectures using tools such as Kafka, Azure Event Hubs, AWS Kinesis, and streaming pipelines
Work with file formats such as Parquet, Avro, ORC, JSON, CSV, XML, and Delta Lake
Collaborate with analytics and BI teams to prepare curated datasets for Power BI, Tableau, QuickSight, and other reporting tools
Implement data governance, security, lineage, and access control using cloud-native security and governance services
Support CI/CD pipelines, code promotion, automated deployment, and infrastructure-as-code practices
Ensure platforms are designed for high availability, resilience, scalability, and cost optimisation
Troubleshoot production issues, perform root cause analysis, and drive permanent fixes
Contribute to architecture discussions, best practices, coding standards, and technical documentation
Mentor junior engineers and promote strong engineering practices across the team
Job Types: Full-time, Permanent, Temporary, Fixed-term contract
Pay: £70,000.00-£96,188.59 per year
Benefits:
Flexitime
Work from home
Work Location: Remote