

Vallum Associates
Developer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Developer with a contract length of "X months" and a pay rate of "$X/hour". It requires expertise in Microsoft Azure, Spark programming, Python, and data engineering, focusing on financial market applications and compliance protocols.
Country
United Kingdom
Currency
£ GBP
-
Day rate
Unknown
-
Date
December 19, 2025
Duration
Unknown
-
Location
Unknown
-
Contract
Unknown
-
Security
Unknown
-
Location detailed
London Area, United Kingdom
-
Skills detailed
#Cloud #Datasets #Azure #Scala #Data Pipeline #Microsoft Azure #Leadership #Java #PySpark #DevOps #Agile #Spark SQL #Data Processing #Docker #Batch #Spark (Apache Spark) #"ETL (Extract, Transform, Load)" #Semantic Models #Deployment #Documentation #SQL (Structured Query Language) #Distributed Computing #Data Engineering #Kubernetes #Python #GDPR (General Data Protection Regulation) #Data Lake #Programming #Microsoft Power BI #Dataflow #GIT #Azure cloud #Data Accuracy #Strategy #Compliance #BI (Business Intelligence) #GitLab
Role description
The Role
The role will be integral to realising the customer's vision and strategy for transforming some of their critical application and data engineering components. As a global financial markets infrastructure and data provider, the customer keeps abreast of the latest cutting-edge technologies underpinning their core services and business requirements. The role is critical to this endeavour, providing the technical thought leadership and excellence required.
Your responsibilities:
• Design, build, and optimise scalable data pipelines for batch and streaming workloads (see the PySpark sketch after this list)
• Develop and manage dataflows and semantic models to support specific analytics-related business requirements
• Implement complex transformations, aggregations, and joins, ensuring performance and reliability
• Implement and apply robust data validation, cleansing, and profiling techniques to ensure data accuracy and consistency across datasets
• Implement role-based access, data masking, and compliance protocols
• Performance-tune and optimise jobs and workloads to reduce latency
• Work collaboratively with analysts and business stakeholders to translate requirements into technical solutions
• Create, maintain, and update documentation and the internal knowledge repository
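A minimal PySpark sketch of the kind of batch pipeline these responsibilities describe is shown below, covering validation, a simple masking rule, a join, and an aggregation. The paths, column names, and masking logic are illustrative assumptions, not details taken from the role description.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("trades-batch-pipeline").getOrCreate()

# Hypothetical inputs: a raw trades feed and an instrument reference table.
trades = spark.read.parquet("/data/raw/trades")
instruments = spark.read.parquet("/data/ref/instruments")

# Validation/cleansing: drop rows missing a key and reject non-positive quantities.
clean = (
    trades
    .dropna(subset=["trade_id", "instrument_id"])
    .filter(F.col("quantity") > 0)
)

# Simple masking in the spirit of role-based access: expose only the last
# four characters of the (assumed) account identifier.
masked = clean.withColumn(
    "account_masked",
    F.concat(F.lit("****"), F.substring("account_id", -4, 4)),
).drop("account_id")

# Transformation + join + aggregation: daily notional per instrument.
daily_notional = (
    masked
    .withColumn("trade_date", F.to_date("trade_ts"))
    .join(instruments, "instrument_id", "left")
    .groupBy("trade_date", "instrument_id", "instrument_name")
    .agg(
        F.sum(F.col("quantity") * F.col("price")).alias("notional"),
        F.count("*").alias("trade_count"),
    )
)

# Partitioned output for downstream analytics consumers.
daily_notional.write.mode("overwrite").partitionBy("trade_date").parquet(
    "/data/curated/daily_notional"
)
```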
Your Profile
Essential skills/knowledge/experience:
• Experience of programming on the Microsoft Azure cloud platform
• Experience of programming on the Microsoft Fabric platform
• Knowledge of Spark programming: the ability to write Spark code for large-scale data processing, including RDDs, DataFrames, and Spark SQL
• Python / notebook programming
• PySpark programming
• Spark Streaming/batch processing (see the streaming sketch after this list)
• Delta table optimisation
• Fabric Spark jobs
• Java programming language and OOP knowledge
• Database knowledge, including relational and NoSQL databases
• Experience with tools such as GitLab, Python unit testing, and CI/CD pipelines
• Strong troubleshooting skills
• Familiarity with Agile methodologies and good communication
• Good English listening and speaking skills for communicating requirements and discussing development tasks/issues
• Hands-on experience with lakehouses, dataflows, pipelines, and semantic models
• Ability to build ETL workflows
• Familiarity with time-series data, market feeds, transactional records, and risk metrics
• Familiarity with Git, DevOps pipelines, and automated deployment
• Strong communication skills and a collaborative mindset for working with and managing stakeholders
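The Spark Streaming and Delta table optimisation items above can be pictured with the hedged sketch below: a structured-streaming aggregation of a hypothetical market feed written to a Delta table, followed by a compaction step. The feed path, schema, checkpoint location, and table path are assumptions, and the OPTIMIZE/ZORDER statement assumes a Delta runtime that supports it (for example Microsoft Fabric or Databricks).

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("market-feed-stream").getOrCreate()

# Assumed schema for an incoming JSON market feed.
feed_schema = StructType([
    StructField("instrument_id", StringType()),
    StructField("price", DoubleType()),
    StructField("event_ts", TimestampType()),
])

feed = spark.readStream.schema(feed_schema).json("/landing/market_feed")

# One-minute average price per instrument, with a watermark for late events.
avg_price = (
    feed
    .withWatermark("event_ts", "5 minutes")
    .groupBy(F.window("event_ts", "1 minute"), "instrument_id")
    .agg(F.avg("price").alias("avg_price"))
)

# Append finalised windows to a Delta table; the checkpoint tracks stream progress.
query = (
    avg_price.writeStream
    .format("delta")
    .outputMode("append")
    .option("checkpointLocation", "/chk/market_feed_agg")
    .start("/tables/market_feed_agg")
)
# query.awaitTermination()  # blocks for the lifetime of the stream in a real job

# Periodic Delta maintenance: compact small files and co-locate by instrument.
spark.sql("OPTIMIZE delta.`/tables/market_feed_agg` ZORDER BY (instrument_id)")
```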
Desirable skills/knowledge/experience:
• Ability to prepare and process datasets for Power BI usage (illustrated in the sketch after this list)
• Experience with OneLake, Azure Data Lake, and distributed computing environments
• Understanding of regulations such as GDPR and SOX
• Spark application performance tuning
• Knowledge of Docker / Kubernetes
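As a small illustration of the Power BI preparation point, the sketch below trims an aggregated table down to the typed columns a semantic model would import. The table and column names are assumptions carried over from the earlier batch sketch, not part of the customer's actual estate.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("powerbi-dataset-prep").getOrCreate()

# Assumed curated output produced by the batch sketch above.
daily_notional = spark.read.parquet("/data/curated/daily_notional")

# Keep only the columns the report needs and cast them to BI-friendly types.
report_ready = daily_notional.select(
    F.col("trade_date").cast("date"),
    F.col("instrument_name"),
    F.col("notional").cast("decimal(18,2)"),
    F.col("trade_count").cast("int"),
)

# Overwrite the table the Power BI semantic model points at
# (assumes a "curated" schema already exists in the metastore).
report_ready.write.format("delta").mode("overwrite").saveAsTable(
    "curated.daily_notional_report"
)
```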






