

SRMD Ltd
Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer with a contract length of "X months" and a pay rate of "$X/hour". Required skills include strong SQL, Python scripting, and geospatial data processing experience. Familiarity with Azure Data Factory and open government datasets is essential.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
Unknown
-
🗓️ - Date
March 6, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
London Area, United Kingdom
-
🧠 - Skills detailed
#"ETL (Extract #Transform #Load)" #Pandas #Matplotlib #Datasets #Data Pipeline #SQL (Structured Query Language) #Azure #Data Engineering #Anomaly Detection #Data Profiling #Scripting #Data Integration #Azure Data Factory #ADF (Azure Data Factory) #Data Quality #Spark (Apache Spark) #Spatial Data #MySQL #Data Processing #Data Ingestion #Automation #Python
Role description
Key Responsibilities:
• Develop and maintain data ingestion and export pipelines, including schema discovery, incremental loads, and multi-source data integration.
• Build transformation-heavy data pipelines covering data profiling, cleansing, standardization, conformance, and publishing.
• Use advanced SQL for data profiling, joins/merges, deduplication, anomaly detection, and performance tuning.
• Develop automation scripts and data quality checks using Python (Pandas, Polars, scikit-learn, matplotlib); a minimal illustrative sketch follows this list.
• Work with modern data platforms such as Spark and Azure Data Factory or equivalent code-based frameworks.
• Process and analyze geospatial datasets including vector, raster, GeoJSON, and shapefiles while managing coordinate reference systems.
• Apply geographic context to produce regional and national-level datasets and insights.
• Utilize publicly available datasets such as ONS open data (census boundaries, geographic lookups, deprivation indices, population estimates).
• Implement data quality rules for completeness, validity, and consistency with appropriate exception handling.
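As a concrete illustration of the Python data-quality and deduplication work described in the bullets above, here is a minimal Pandas sketch. It is illustrative only: the file name and the area_code and population columns are assumptions for the example, not details taken from this posting.

import pandas as pd

def run_quality_checks(df: pd.DataFrame) -> dict:
    """Return simple completeness, validity, and duplication metrics."""
    report = {}
    # Completeness: share of non-null values in each column.
    report["completeness"] = df.notna().mean().to_dict()
    # Validity: population estimates should be present and non-negative (assumed rule).
    if "population" in df.columns:
        report["invalid_population_rows"] = int(
            ((df["population"] < 0) | df["population"].isna()).sum()
        )
    # Consistency: rows duplicated on the assumed business key.
    report["duplicate_area_codes"] = int(df.duplicated(subset=["area_code"]).sum())
    return report

if __name__ == "__main__":
    # Hypothetical extract, e.g. an ONS-style population estimates file.
    df = pd.read_csv("population_estimates.csv")
    print(run_quality_checks(df))

In practice, checks like these would feed the pipeline's exception handling rather than simply being printed.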
Key Skills:
• Strong SQL and MySQL expertise
• Python scripting for data engineering and analytics
• Experience with data transformation pipelines and modern data tooling
• Knowledge of geospatial data processing and spatial analysis (see the geospatial sketch after this list)
• Experience working with open government datasets (e.g., ONS)
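For the geospatial skills listed above, the sketch below shows coordinate-reference-system handling and a regional roll-up using GeoPandas. Note that GeoPandas itself is an assumption (the posting names Pandas, Polars, scikit-learn, and matplotlib), and the file paths and column names (area_code, region, population) are hypothetical.

import geopandas as gpd
import pandas as pd

# Census boundaries, e.g. a GeoJSON export; ONS geometries are often
# published in British National Grid (EPSG:27700).
boundaries = gpd.read_file("census_boundaries.geojson")
if boundaries.crs is None:
    boundaries = boundaries.set_crs(epsg=27700)

# Reproject to WGS84 (EPSG:4326) before joining with lat/lon data or web mapping.
boundaries_wgs84 = boundaries.to_crs(epsg=4326)

# Join population estimates on the area code and aggregate to region level.
population = pd.read_csv("population_estimates.csv")  # area_code, region, population
merged = boundaries_wgs84.merge(population, on="area_code", how="left")
regional_totals = merged.groupby("region", as_index=False)["population"].sum()
print(regional_totals.head())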






