

TechNET IT Recruitment Ltd
Azure Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for an Azure Data Engineer on a contract basis, remote (UK-based), paying £500.00 per day outside IR35. Key skills include Azure Databricks, Azure Data Factory, and strong Python. Experience in pharmaceutical or life sciences is essential.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
500
-
🗓️ - Date
April 1, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Remote
-
📄 - Contract
Outside IR35
-
🔒 - Security
Unknown
-
📍 - Location detailed
United Kingdom
-
🧠 - Skills detailed
#ADF (Azure Data Factory) #Snowflake #Databases #SQL Server #DevOps #Azure Data Factory #Spark SQL #Azure DevOps #IAM (Identity and Access Management) #PySpark #Spark (Apache Spark) #Triggers #Delta Lake #Data Lake #Data Engineering #Storage #Vault #ML (Machine Learning) #Git #FastAPI #Python #Data Quality #Data Pipeline #ADLS (Azure Data Lake Storage) #ETL (Extract, Transform, Load) #Azure Databricks #Data Science #Databricks #Scripting #Scala #SQL (Structured Query Language) #Automation #Datasets #Version Control #Azure
Role description
Azure Data Engineer (Contract)
Location: Remote (UK-based)
Rate: £500.00 per day Outside IR35
Start Date: ASAP
We are working with a global pharmaceutical and life sciences organisation seeking an experienced Azure Data Engineer to join on a contract basis. This is a fantastic opportunity to contribute to large-scale, data-driven initiatives within a highly regulated and impactful industry.
The Role
You will play a key role in designing, building, and optimising scalable data solutions on Azure. Working within a collaborative data team, you’ll help deliver robust data pipelines and architectures that support analytics, research, and business-critical decision-making.
Key Responsibilities
• Design and develop scalable data pipelines using Azure technologies
• Build and optimise ETL/ELT processes for large datasets
• Implement data solutions using Azure Databricks and Delta Lake
• Orchestrate workflows using Azure Data Factory
• Develop and maintain data models to support reporting and analytics
• Ensure data quality, performance optimisation, and best practices
• Collaborate with cross-functional teams including data scientists and analysts
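The responsibilities above describe a classic layered (Bronze/Silver/Gold) pipeline: land raw data, cleanse and validate it, then aggregate it for analytics. As a rough, dependency-free sketch of that shape — plain Python stands in for the PySpark/Delta Lake code the role would actually use, and all record and field names are invented for illustration:

```python
# Illustrative medallion-style pipeline: bronze (raw) -> silver
# (cleansed, quality-checked) -> gold (analytics-ready aggregate).
# Lists of dicts stand in for Spark DataFrames / Delta tables.

from dataclasses import dataclass

@dataclass
class Trial:
    trial_id: str
    site: str
    enrolled: int

def extract_bronze():
    # Raw landing data as ingested; may contain bad rows.
    return [
        {"trial_id": "T-001", "site": "London", "enrolled": "120"},
        {"trial_id": "T-002", "site": "Leeds", "enrolled": "85"},
        {"trial_id": None, "site": "Bristol", "enrolled": "40"},  # fails quality rule
    ]

def transform_silver(rows):
    # Cleanse and type the data; drop rows failing the quality rule.
    return [
        Trial(r["trial_id"], r["site"], int(r["enrolled"]))
        for r in rows
        if r["trial_id"] is not None
    ]

def load_gold(trials):
    # Aggregate to a reporting-ready metric.
    return {"total_enrolled": sum(t.enrolled for t in trials)}

gold = load_gold(transform_silver(extract_bronze()))
print(gold)  # {'total_enrolled': 205}
```

In Databricks the same three stages would typically be Delta tables written per layer, with the quality rule enforced as an expectation rather than a list filter.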
Essential Skills & Experience
• Strong hands-on experience with Azure Databricks (PySpark, Spark SQL, Delta Lake)
• Proven expertise in Azure Data Factory (ADF) including pipeline orchestration, triggers, and linked services
• Experience working with Azure Data Lake Storage Gen2 (ADLS Gen2) and layered architecture (Bronze / Silver / Gold)
• Strong Python skills for data transformation, scripting, and automation
• Solid experience with SQL Server or other relational databases, including querying, stored procedures, and performance tuning
• Strong understanding of data modelling, including Star and Snowflake schemas
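On the data-modelling point: a star schema joins a central fact table to denormalised dimension tables, while a snowflake schema further normalises those dimensions. A minimal sketch using stdlib `sqlite3` as a stand-in for SQL Server — all table and column names here are hypothetical:

```python
# Tiny star schema: one dimension table, one fact table, and the
# typical join-and-aggregate query an analytics workload would run.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_product (
        product_key  INTEGER PRIMARY KEY,
        product_name TEXT,
        category     TEXT  -- a snowflake schema would normalise this
    );                     -- out into its own dim_category table
    CREATE TABLE fact_sales (
        product_key INTEGER REFERENCES dim_product(product_key),
        quantity    INTEGER,
        amount      REAL
    );
""")
conn.executemany("INSERT INTO dim_product VALUES (?, ?, ?)",
                 [(1, "Aspirin 100mg", "Analgesic"),
                  (2, "Ibuprofen 200mg", "Analgesic")])
conn.executemany("INSERT INTO fact_sales VALUES (?, ?, ?)",
                 [(1, 10, 25.0), (2, 4, 12.0), (1, 2, 5.0)])

# Join the fact table to its dimension and aggregate per product.
rows = conn.execute("""
    SELECT p.product_name, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_product p ON p.product_key = f.product_key
    GROUP BY p.product_name
    ORDER BY p.product_name
""").fetchall()
print(rows)  # [('Aspirin 100mg', 30.0), ('Ibuprofen 200mg', 12.0)]
```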
Desirable Skills
• Experience with FastAPI for lightweight data or ML APIs
• Familiarity with Azure DevOps (CI/CD pipelines, release management)
• Experience with Git/version control and collaborative development workflows
• Knowledge of Azure fundamentals (IAM, Key Vault, networking, resource management)
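For the Azure DevOps item, a hypothetical `azure-pipelines.yml` sketch of the kind of CI run such a data-engineering repo might use — the branch, paths, and commands are illustrative only, not taken from the role:

```yaml
# Hypothetical CI pipeline: lint-free install and unit tests on
# every push to main.
trigger:
  branches:
    include: [main]

pool:
  vmImage: ubuntu-latest

steps:
  - task: UsePythonVersion@0
    inputs:
      versionSpec: "3.11"
  - script: pip install -r requirements.txt
    displayName: Install dependencies
  - script: pytest tests/
    displayName: Run unit tests
```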