

Arrows
Senior Data Scientist
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Scientist on a 12-month rolling contract, paying £600-£700 per day. Candidates must have production-ready data science experience, preferably in Supply Chain or Logistics, and skills in Python, SQL, and cloud platforms like AWS or GCP.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
720
-
🗓️ - Date
January 11, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
Hybrid
-
📄 - Contract
Inside IR35
-
🔒 - Security
Unknown
-
📍 - Location detailed
London Area, United Kingdom
-
🧠 - Skills detailed
#Databricks #BigQuery #DataScience #Python #AWS (Amazon Web Services) #NumPy #Snowflake #SciPy #GCP (Google Cloud Platform) #SQL (Structured Query Language) #Redshift #Forecasting #Airflow #Pandas #Spark (Apache Spark) #PySpark
Role description
Urgent requirement: 7x Senior Data Scientists - start ASAP!
🗓️ 12 month rolling contracts
📄 Inside IR35
💰 £600-£700 per day
🌍 UK, hybrid London (1 day office per week)
I need people who have worked on production-ready applied data science products; research-heavy backgrounds are not a fit for this project. Any forecasting or optimization experience is a plus, particularly in Supply Chain or Logistics domains.
You’ll be comfortable with ambiguity and will want to dive deep and fix problems at scale.
The environment looks a bit like this 👇
👉 Python (pandas, numpy, scipy, PySpark)
👉 SQL
👉 AWS or GCP (BigQuery / Redshift / Snowflake)
👉 Spark / Databricks
👉 Airflow / Dagster
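To give a flavour of the stack above, here is a minimal, purely illustrative sketch of the kind of baseline a pandas-based demand-forecasting pipeline might start from. The data and the 7-day moving-average approach are assumptions for illustration only; they are not taken from this role's actual codebase.

```python
import pandas as pd

# Illustrative toy data: eight days of daily demand for one SKU.
demand = pd.Series(
    [100, 120, 110, 130, 125, 140, 135, 150],
    index=pd.date_range("2026-01-01", periods=8, freq="D"),
    name="units",
)

# Naive baseline: forecast the next day as the mean of the
# trailing 7 observations.
next_day_forecast = demand.tail(7).mean()
print(round(next_day_forecast, 2))
```

In practice a production pipeline would wrap logic like this in orchestrated tasks (Airflow or Dagster), read from a warehouse (BigQuery, Redshift, or Snowflake), and scale out with PySpark, but the shape of the work is the same.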
