Xcede
Senior Algorithm Engineer (Python/Spark–Distributed Processing)
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Algorithm Engineer (Python/Spark) focused on distributed data processing. It is fully remote, open to candidates in the UK, Belgium, the Netherlands or Germany, with a contract starting ASAP. Key skills include strong Python, Spark, and experience in large-scale data processing.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
Unknown
-
🗓️ - Date
January 27, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Remote
-
📄 - Contract
Outside IR35
-
🔒 - Security
Unknown
-
📍 - Location detailed
Great Work, England, United Kingdom
-
🧠 - Skills detailed
#Data Pipeline #NumPy #Databricks #Data Processing #ML (Machine Learning) #Cloud #Azure #Pandas #PySpark #Datasets #Microservices #Scala #Batch #SaaS (Software as a Service) #Spark (Apache Spark) #Data Science #Forecasting #IoT (Internet of Things) #AWS (Amazon Web Services) #Python #Leadership #SQL (Structured Query Language)
Role description
Senior Algorithm Engineer (Python / Spark – Distributed Data Processing)
Location: UK (Outside IR35) / Belgium / Netherlands / Germany (B2B)
Working model: Remote
Start: ASAP
We’re hiring a Senior Algorithm Engineer to join a data-intensive SaaS platform operating in a complex, regulated industry. This is a hands-on senior IC role focused on building and optimising the distributed data pipelines that power pricing, forecasting and billing calculations at scale. Note: this is not an ML / Data Science / GenAI role.
What you’ll be doing
• Design, build and deploy algorithms/data models supporting pricing, forecasting and optimisation use cases in production
• Develop and optimise distributed Spark / PySpark batch pipelines for large-scale data processing (a minimal sketch follows this list)
• Write production-grade Python workflows implementing complex, explainable business logic
• Work with Databricks for job execution, orchestration and optimisation
• Improve pipeline performance, reliability and cost efficiency across high-volume workloads
• Collaborate with engineers and domain specialists to translate requirements into scalable solutions
• Provide senior-level ownership through technical leadership, mentoring and best-practice guidance
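For flavour, here is a minimal sketch of the kind of batch pipeline the list above describes, assuming a Parquet source and a simple tiered-tariff rule; every path, column name and price below is invented for the example, not taken from the role:

```python
# Hypothetical PySpark batch pipeline: price raw consumption events and
# roll them up to a daily billing grain. All names/values are assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("pricing-batch").getOrCreate()

# Assumed schema: meter_id, ts (timestamp), kwh, tariff_code
events = spark.read.parquet("s3://example-bucket/raw/consumption/")

# Explainable business logic: plain column expressions, so every priced
# row can be traced back to its inputs
priced = events.withColumn(
    "unit_price",
    F.when(F.col("tariff_code") == "PEAK", F.lit(0.32))
     .when(F.col("tariff_code") == "OFFPEAK", F.lit(0.18))
     .otherwise(F.lit(0.25)),
).withColumn("charge", F.col("kwh") * F.col("unit_price"))

# Aggregate to the billing grain before writing out
daily = (
    priced
    .groupBy("meter_id", F.to_date("ts").alias("day"))
    .agg(F.sum("charge").alias("daily_charge"))
)

daily.write.mode("overwrite").partitionBy("day").parquet(
    "s3://example-bucket/curated/daily_charges/"
)
```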
Key experience required
• Proven experience delivering production algorithms/data models (forecasting, pricing, optimisation or similar)
• Strong Python proficiency and modern data stack exposure (SQL, Pandas/NumPy + PySpark; Dask/Polars/DuckDB a bonus)
• Proven ability to build, schedule and optimise Spark/PySpark pipelines in Databricks (Jobs/Workflows, performance tuning, production delivery)
• Hands-on experience with distributed systems and scalable data processing (Spark essential)
• Experience working with large-scale/high-frequency datasets (IoT/telemetry, smart meter, weather, time-series; see the windowing sketch after this list)
• Clear communicator able to influence design decisions, align stakeholders and operate autonomously
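As a rough illustration of the high-frequency time-series point above, a hedged sketch of rolling smart-meter-style readings up to hourly buckets with PySpark's tumbling windows; again, all paths and columns are assumptions:

```python
# Hypothetical roll-up of high-frequency meter telemetry to hourly buckets.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("telemetry-rollup").getOrCreate()

# Assumed schema: meter_id, ts (timestamp), kwh
readings = spark.read.parquet("s3://example-bucket/raw/meter_readings/")

# F.window gives tumbling time windows, avoiding a shuffle-heavy self-join
hourly = (
    readings
    .groupBy("meter_id", F.window("ts", "1 hour").alias("bucket"))
    .agg(
        F.sum("kwh").alias("kwh_total"),
        F.count("*").alias("n_readings"),  # handy for spotting gaps
    )
    .withColumn("hour_start", F.col("bucket.start"))
    .drop("bucket")
)

hourly.write.mode("overwrite").parquet("s3://example-bucket/curated/hourly_kwh/")
```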
Nice to have
• Energy/utilities domain exposure
• Cloud ownership experience (AWS preferred, Azure also relevant)
• Experience defining microservices / modular components supporting data products