

Gazelle Global
Azure Data Support Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for an Azure Data Support Engineer with a contract length of "X months" and a pay rate of "X per hour". Requires 10+ years in data engineering, strong Azure Synapse and PySpark skills, and SQL performance tuning experience.
Country
United Kingdom
Currency
£ GBP
-
Day rate
Unknown
-
Date
May 2, 2026
-
Duration
Unknown
-
Location
Unknown
-
Contract
Unknown
-
Security
Unknown
-
Location detailed
West Midlands, England, United Kingdom
-
Skills detailed
#Storage #Triggers #Data Modeling #Data Engineering #Computer Science #Python #Documentation #Spark (Apache Spark) #GIT #Delta Lake #Scrum #SQL (Structured Query Language) #Data Transformations #Azure SQL #Azure DevOps #Synapse #Azure #Databricks #DevOps #Datasets #Data Processing #Data Architecture #PySpark #ETL (Extract, Transform, Load) #Agile #Data Lifecycle
Role description:
• Strong Azure Synapse engineering
• Strong PySpark development
• Azure SQL DB stabilisation, including SQL performance tuning
Seniority signals:
• Proven independent operator
• Able to deliver at pace with minimal oversight
• Can diagnose and permanently fix BAU issues rather than relying on repeated manual intervention
• Specific examples of Synapse/PySpark workloads owned through production support
• SQL performance tuning examples
• Evidence of independent delivery
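The "SQL performance tuning" requirement usually means reading execution plans and fixing them with indexes. As a minimal, illustrative sketch (using stdlib sqlite3 rather than Azure SQL DB, and a hypothetical `orders` table; the Azure tooling differs but the before/after principle is the same):

```python
import sqlite3

# In-memory database with a hypothetical orders table (illustrative only).
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)"
)
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 100, i * 1.5) for i in range(1000)],
)

query = "SELECT total FROM orders WHERE customer_id = 42"

# Before indexing: the plan reports a full table scan.
plan_before = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()

# Add an index on the filtered column, then re-check the plan:
# it now searches the index instead of scanning every row.
conn.execute("CREATE INDEX ix_orders_customer ON orders (customer_id)")
plan_after = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()

print(plan_before)  # plan detail mentions a SCAN of orders
print(plan_after)   # plan detail mentions the ix_orders_customer index
```

The same scan-versus-seek reasoning applies in Azure SQL DB via `SET SHOWPLAN` / Query Store, just with richer tooling.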
Key skills/knowledge/experience:
• BSc minimum; MSc or PhD in a STEM field (e.g., Computer Science)
• 10+ years of professional experience in data engineering or a related field, with a proven track record of delivering impactful solutions
• SQL Pools, serverless SQL, Spark Pools
• Strong SQL, performance tuning, query optimization
• Data modeling & warehouse concepts (Kimball/Inmon)
• Pipelines, triggers, linked services, integration runtime
• Data flows & orchestration of large-scale ETL/ELT workloads
• Distributed data processing, partitioning strategies, Spark optimization
• Data transformations, Delta Lake, and notebook-based development
• Hierarchical namespace, folder structure design, ACLs, RBAC
• Working with large-scale datasets and optimized storage formats
• Strong understanding of ETL/ELT frameworks, data lifecycle, and data architecture
• Experience with Azure DevOps, CI/CD, Git branching strategies
• Proficiency in SQL and Python
• Knowledge of Databricks (added advantage)
• Experience in Agile/Scrum environments
• Excellent problem-solving, communication, and documentation skills
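Several of the bullets above (distributed data processing, partitioning strategies) come down to spreading rows across workers by key. A minimal, framework-free sketch of hash partitioning in plain Python, mirroring the idea behind Spark's `HashPartitioner` (hypothetical sample data; illustrative only):

```python
from collections import defaultdict

def hash_partition(records, key_fn, num_partitions):
    """Assign each record to a partition by hashing its key.

    Same principle as Spark's HashPartitioner: rows sharing a key
    always land in the same partition (illustrative sketch only).
    """
    partitions = defaultdict(list)
    for record in records:
        partitions[hash(key_fn(record)) % num_partitions].append(record)
    return partitions

# Hypothetical sample data: (customer_id, amount) pairs.
rows = [(i % 7, float(i)) for i in range(70)]
parts = hash_partition(rows, key_fn=lambda r: r[0], num_partitions=4)

# Every row lands in exactly one partition, and rows with the same
# key co-locate -- the property that joins and groupBy rely on.
assert sum(len(p) for p in parts.values()) == len(rows)
```

Skewed keys break this even spread, which is why Spark tuning often involves salting or repartitioning hot keys.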
Good to have:
• Domain knowledge: understanding of the water industry
Person specification (e.g., negotiating, client-facing, communication, assertiveness, team leading/team member skills, supportiveness):
• Collaborate with customers and stakeholders
• Grow your career while being exposed to new technologies
• Lead projects and inspire both colleagues and stakeholders
• Mentor junior employees using your expertise






