

Data Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is a Data Engineer contract in London for 6 months (Outside IR35), requiring strong experience with Python, PySpark, SQL, and cloud data tools. Familiarity with commodities trading and data integration is preferred. On-site work is required 4 days a week.
Country: United Kingdom
Currency: £ GBP
Day rate: -
Date discovered: June 18, 2025
Project duration: More than 6 months
Location type: On-site
Contract type: Outside IR35
Security clearance: Unknown
Location detailed: London Area, United Kingdom
Skills detailed: #Debugging #Data Quality #Scala #Cloud #Azure #Visualization #GIT #Data Lake #ADF (Azure Data Factory) #Spark (Apache Spark) #SQL (Structured Query Language) #Data Integration #Data Engineering #Data Science #ETL (Extract, Transform, Load) #Data Pipeline #Azure Data Factory #Synapse #Python #Pandas #Databricks #PySpark
Role description
Data Engineer
Location: London (4 days/week on-site)
Type: Contract (Outside IR35)
Duration: 6 months
A leading physical commodities trading firm is seeking a Data Engineer for a 6-month contract (Outside IR35). You'll work on-site in London 4 days per week, building and optimizing data pipelines in a fast-paced, data-driven trading environment.
Responsibilities:
• Build and maintain scalable data pipelines (ETL/ELT)
• Support data users with ingestion, debugging, and navigation
• Integrate data from diverse sources (structured & unstructured)
• Monitor and ensure data quality
• Maintain and improve existing production processes
• Develop dashboards and visualizations
• Work closely with data scientists and stakeholders
• Follow CI/CD and code best practices (Git, testing, reviews); a minimal test sketch follows this list
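
For illustration only, and not part of the posting: a minimal pytest-style sketch of the kind of pipeline test the bullet above implies. The enrich_trades function, its column names, and the sample values are hypothetical assumptions.

# Hypothetical unit test for a small pipeline transform (pytest discovers
# and runs the test_ function). Columns and values are made up for the sketch.
import pandas as pd

def enrich_trades(df: pd.DataFrame) -> pd.DataFrame:
    """Drop rows with non-positive prices and add a notional column."""
    out = df[df["price"] > 0].copy()
    out["notional"] = out["price"] * out["quantity"]
    return out

def test_enrich_trades_filters_and_enriches():
    raw = pd.DataFrame({"price": [10.0, -1.0], "quantity": [3, 5]})
    result = enrich_trades(raw)
    assert len(result) == 1                    # bad row dropped
    assert result["notional"].iloc[0] == 30.0  # 10.0 * 3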
Tech Stack & Experience:
• Strong Python (Pandas), PySpark, and SQL skills (see the sketch after this list)
• Cloud data tools (Azure Data Factory, Synapse, Databricks, etc.)
• Data integration experience across formats and platforms
• Strong communication and data literacy
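
To ground the stack above, a minimal PySpark sketch of an ETL step with a simple data-quality gate. The file paths, column names, and the 5% rejection threshold are illustrative assumptions, not details from the role.

# Minimal PySpark ETL sketch: ingest a hypothetical trades CSV, filter bad
# rows, gate on the rejection rate, and land curated Parquet downstream.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("trades-etl-sketch").getOrCreate()

# Extract: read the raw feed (hypothetical landing path)
raw = (
    spark.read
    .option("header", True)
    .option("inferSchema", True)
    .csv("/data/raw/trades/")
)

# Transform: normalise a column name, derive notional, drop bad rows
trades = (
    raw.withColumnRenamed("TradeDate", "trade_date")
       .withColumn("notional", F.col("price") * F.col("quantity"))
       .filter(F.col("price") > 0)
)

# Data-quality gate: fail fast if too many rows were rejected
# (the 5% threshold is an assumption, not a project requirement)
raw_count = raw.count()
rejected = raw_count - trades.count()
if raw_count > 0 and rejected / raw_count > 0.05:
    raise ValueError(f"Data-quality gate tripped: {rejected} of {raw_count} rows rejected")

# Load: write partitioned Parquet for downstream consumers
trades.write.mode("overwrite").partitionBy("trade_date").parquet("/data/curated/trades/")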
Nice to Have:
• Commodities/trading background
• Experience with data lakes and open-source data tools
• Familiarity with governance frameworks (OPA, Ranger)