iXceed Solutions

Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer (Python Enterprise Developer) in London (Canary Wharf) for 9+ months at an unspecified pay rate. Requires 8+ years of Python development, SQL expertise, and experience with ETL pipelines, preferably in the trading or energy sectors.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
Unknown
-
🗓️ - Date
May 12, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
London Area, United Kingdom
-
🧠 - Skills detailed
#S3 (Amazon Simple Storage Service) #Azure Databricks #GIT #Data Processing #Data Pipeline #Airflow #Agile #Big Data #Spark (Apache Spark) #Data Engineering #Azure #Databricks #AWS S3 (Amazon Simple Storage Service) #Scrum #DevOps #AWS (Amazon Web Services) #Libraries #Azure DevOps #Apache Airflow #ETL (Extract, Transform, Load) #Complex Queries #Python #PySpark #NumPy #SQL (Structured Query Language) #Pandas
Role description
Data Engineer (Python Enterprise Developer)
Location: London – Canary Wharf
Work Mode: Hybrid (4 days onsite/week)
Contract Duration: 9+ Months

Role Overview
We are looking for a highly hands-on Senior Python Data Engineer with strong core Python development expertise and experience building native Python ETL/data-processing solutions. The role demands deep practical coding ability rather than heavy reliance on Spark/Databricks abstractions. Candidates must be confident in live-coding environments and demonstrate strong real-world experience with Python data libraries, SQL engineering, and production-grade data pipelines. Preference will be given to candidates with front-office, trading, or energy-trading domain exposure.

Mandatory Skills
• Python
• PostgreSQL
• Azure Databricks
• AWS (S3)
• Apache Airflow
• Git
• Azure DevOps / CI/CD

Required Experience
• 8+ years of hands-on Python development
• Strong experience with Pandas, Polars, NumPy, Selenium, BeautifulSoup, and Requests
• Experience building native Python ETL pipelines for scraping, transformation, and processing
• Strong SQL capability, including complex queries and data modelling
• Experience with AWS services and Azure Databricks
• Hands-on CI/CD and DevOps workflow exposure
• Agile/Scrum delivery experience

Preferred Experience
• Energy trading / trading domain background
• Front-office or fast-paced data environments
• Exposure to big data processing and Spark

Important Notes
• Strong core Python coding ability is essential
• Candidates must be comfortable with live coding interviews
• Purely PySpark/Databricks-focused profiles without strong native Python expertise may not be suitable
• Preference for hands-on engineers over coordination-only profiles
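To give a sense of what "native Python ETL" (as opposed to Spark/Databricks abstractions) means in practice, here is a minimal sketch of an extract–transform–load pipeline in standard-library Python. The column names, table name, and trade data are illustrative assumptions, not taken from the posting:

```python
# Minimal native-Python ETL sketch: extract rows from CSV text,
# transform/validate them, load them into SQLite.
# All names and sample data below are hypothetical.
import csv
import io
import sqlite3

RAW_CSV = """trade_id,commodity,volume_mwh,price_gbp
1,power,120,54.30
2,gas,300,41.15
3,power,80,not_a_number
"""

def extract(text):
    """Parse CSV text into a list of dicts (one per row)."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Drop rows with an unparseable price; compute notional value."""
    out = []
    for row in rows:
        try:
            price = float(row["price_gbp"])
        except ValueError:
            continue  # skip malformed rows instead of failing the batch
        volume = float(row["volume_mwh"])
        out.append({
            "trade_id": int(row["trade_id"]),
            "commodity": row["commodity"],
            "volume_mwh": volume,
            "notional_gbp": round(volume * price, 2),
        })
    return out

def load(rows, conn):
    """Insert transformed rows into a SQLite table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS trades ("
        "trade_id INTEGER PRIMARY KEY, commodity TEXT, "
        "volume_mwh REAL, notional_gbp REAL)"
    )
    conn.executemany(
        "INSERT INTO trades VALUES "
        "(:trade_id, :commodity, :volume_mwh, :notional_gbp)",
        rows,
    )
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW_CSV)), conn)
result = conn.execute(
    "SELECT COUNT(*), SUM(notional_gbp) FROM trades"
).fetchone()
# result → (2, 18861.0): the malformed third row was dropped.
```

In a production setting each step would typically become an Airflow task, with the CSV source swapped for S3 objects and SQLite for PostgreSQL, but the extract/transform/load separation stays the same.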