Vector Resourcing

Data Specialist

⭐ - Featured Role | Apply directly with Data Freelance Hub
This is a remote Data Specialist role; the contract length and pay rate are unspecified. Key skills include Azure Data Factory, SQL, automated testing, and data validation frameworks, along with strong knowledge of the Azure ecosystem.
🌎 - Country
United Kingdom
πŸ’± - Currency
Β£ GBP
πŸ’° - Day rate
Unknown
πŸ—“οΈ - Date
December 23, 2025
πŸ•’ - Duration
Unknown
🏝️ - Location
Unknown
πŸ“„ - Contract
Unknown
πŸ”’ - Security
Unknown
πŸ“ - Location detailed
United Kingdom
🧠 - Skills detailed
#"ETL (Extract #Transform #Load)" #Synapse #Automation #Cloud #Spark (Apache Spark) #Scripting #Automated Testing #PySpark #REST (Representational State Transfer) #DevOps #Data Accuracy #SQL (Structured Query Language) #DataOps #Network Security #ADF (Azure Data Factory) #Azure Data Factory #Azure #Data Engineering #Data Pipeline #SQL Queries #Monitoring #Data Quality #Security #Storage #Debugging #Logging #Azure DevOps #KQL (Kusto Query Language)
Role description
Summary
This is a hands-on DataOps / Data Quality Engineer role focused on building data validation frameworks and automated testing for Azure-based data platforms. The role also carries DataOps responsibilities, ensuring reliable, observable, and well-governed pipeline operations across Fabric Data Factory, Azure Data Factory, and Synapse environments. In addition, the engineer will take on Data Reliability Engineering (SRE-style) responsibilities.

Key Responsibilities
• Build, maintain, or leverage open-source data validation frameworks to ensure data accuracy, schema integrity, and quality across ingestion and transformation pipelines
• Test and validate data pipelines and PySpark notebooks developed by Data Engineers, ensuring they meet quality, reliability, and validation standards
• Define and standardize monitoring, logging, alerting, and KPIs/SLAs across the data platform to enable consistent measurement of data reliability
• Identify and create Azure Monitor alert rules, and develop KQL queries to extract metrics and logs from Azure Monitor/Log Analytics for reliability tracking and alerting (a sketch of running such a query programmatically follows this list)
• Write SQL queries and PowerShell (or another scripting language) to automate the execution of validation routines, verify pipeline outputs, and support end-to-end data quality workflows
• Collaborate with Data Engineering, Cloud, and Governance teams to embed standardized validation and reliability practices into their workflows
• Document validation rules, testing processes, operational guidelines, and data reliability best practices to ensure consistency across teams
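For illustration only, here is a minimal sketch of the kind of KQL-driven reliability check the responsibilities describe, run from Python with the azure-monitor-query SDK. The workspace ID is a placeholder, and the use of the ADFPipelineRun diagnostics table and its column names is an assumption about how the platform routes Data Factory logs into Log Analytics, not a detail from the posting:

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient, LogsQueryStatus

# Hypothetical workspace ID -- substitute your Log Analytics workspace.
WORKSPACE_ID = "00000000-0000-0000-0000-000000000000"

# Example KQL: count failed Azure Data Factory pipeline runs per pipeline
# over the query timespan (assumes ADF diagnostic logs are routed to
# Log Analytics, where they land in the ADFPipelineRun table).
QUERY = """
ADFPipelineRun
| where Status == 'Failed'
| summarize failures = count() by PipelineName
| order by failures desc
"""

def failed_pipeline_counts() -> dict:
    client = LogsQueryClient(DefaultAzureCredential())
    response = client.query_workspace(
        WORKSPACE_ID, QUERY, timespan=timedelta(days=1)
    )
    if response.status != LogsQueryStatus.SUCCESS:
        # Partial results carry an error describing what went wrong.
        raise RuntimeError(f"query failed: {response.partial_error}")
    table = response.tables[0]
    # Each row pairs a pipeline name with its failure count.
    return {row[0]: row[1] for row in table.rows}

if __name__ == "__main__":
    for pipeline, failures in failed_pipeline_counts().items():
        print(f"{pipeline}: {failures} failed runs in the last 24h")
```

A query like this could feed an Azure Monitor alert rule or a reliability dashboard, which is the pattern the role calls out.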
β€’ Strong understanding of the Azure ecosystem, including identity, network security, storage, and authentication models β€’ Working experience with Azure DevOps and CI/CD β€’ Strong debugging, incident resolution, and system reliability skills aligned to SRE β€’ Ability to work independently while collaborating effectively across Data Engineering, Cloud, Analytics, and Governance teams Beneficial Experience β€’ Experience in data space, with strong exposure to data testing, validations, and Data Reliability Engineering β€’ Experience defining and tracking data quality KPIs, operational KPIs, and SLAs to measure data reliability and performance β€’ Hands-on experience using Azure Monitor, Log Analytics, and writing KQL queries to collect monitoring data and define alert rules β€’ Experience writing SQL and PowerShell (or another scripting language) to automate data validation, reconciliation, and rule execution β€’ Exposure to data validation frameworks such as Great Expectations, Soda, or custom SQL/PySpark rule engines β€’ Experience validating pipelines and PySpark notebooks developed by data engineering teams across Fabric Data Factory, Azure Data Factory, and Synapse β€’ Experience defining and documenting validation rules, operational testing guidelines, and reliability processes for consistent team adoption