Data Engineer (Contract)

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer (Contract) on an initial 6-month term, paying £450-£500 per day, predominantly remote. It requires 3-5 years of data engineering experience; proficiency in Python, ETL pipelines, and SQL; and familiarity with financial data and cloud platforms.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
480
-
🗓️ - Date discovered
May 21, 2025
🕒 - Project duration
More than 6 months
-
🏝️ - Location type
Remote
-
📄 - Contract type
Outside IR35
-
🔒 - Security clearance
Unknown
-
📍 - Location detailed
London, England, United Kingdom
-
🧠 - Skills detailed
#Python #Data Analysis #Datasets #ML (Machine Learning) #MySQL #AWS (Amazon Web Services) #Libraries #Azure #Cloud #Hadoop #Data Science #Web Scraping #Pandas #Data Engineering #Normalization #GCP (Google Cloud Platform) #NumPy #Data Cleaning #ETL (Extract, Transform, Load) #TensorFlow #Database Systems #Spark (Apache Spark) #PostgreSQL #Big Data #SQL (Structured Query Language) #Data Extraction #Scala #SaaS (Software as a Service) #Strategy
Role description
Data Engineer - Contract
£450-£500 per day | Outside IR35
6-Month Initial Contract
Predominantly Remote | Occasional Office Visits Required

We are working with a fast-growing SaaS organization that plays a key role in providing data-driven solutions across the financial services sector. As part of their mission to scale impactful products, they are looking to expand their data capabilities and optimize the quality and availability of insights across their platform. This role is crucial for enhancing their current architecture, integrating diverse data sources, and enabling predictive and prescriptive analytics that will directly influence business strategy and client delivery.

Key responsibilities
• Design, deploy, and maintain Python-based web crawlers using tools such as Scrapy, BeautifulSoup, or Selenium (see the scraping sketch after this description)
• Implement scalable and reliable web scraping frameworks for high-volume data extraction across websites and social media platforms
• Perform data cleaning, standardization, and normalization to ensure consistency and quality across all datasets (see the pandas sketch below)
• Build and maintain ETL pipelines for processing structured and unstructured data (see the ETL sketch below)
• Conduct data analysis and modeling using tools like Pandas, NumPy, Scikit-learn, and TensorFlow
• Leverage financial data expertise to identify trends, patterns, and anomalies within complex datasets
• Support and improve SQL-based queries and work with database systems including PostgreSQL and MySQL
• Collaborate with cross-functional teams, including data scientists, analysts, and product stakeholders, to support data-driven decision-making
• Work with cloud environments such as AWS, Azure, or GCP, and explore opportunities to scale infrastructure

Required experience and skills
• 3-5 years of experience in a data engineering or similar role
• Proficiency in Python for web crawling using libraries like Scrapy, BeautifulSoup, or Selenium
• Strong understanding of data cleaning, standardization, and normalization techniques
• Experience building ETL/ELT pipelines and working with large-scale data workflows
• Hands-on experience with data analysis and machine learning libraries such as Pandas, NumPy, Scikit-learn, or TensorFlow
• Familiarity with SQL and relational database systems (e.g., PostgreSQL, MySQL)
• Exposure to cloud platforms such as AWS, Azure, or GCP
• Experience with big data tools such as Spark and Hadoop
• Previous experience working with financial data, including understanding of financial metrics and industry trends
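As a rough illustration of the Python crawler work described in the responsibilities above, the sketch below fetches a single page with requests and parses it with BeautifulSoup. The target URL and the choice of `<h2>` headlines are placeholder assumptions for illustration, not details of the client's sites or a prescribed approach.

```python
# Minimal scraping sketch using the requests + BeautifulSoup stack named in the posting.
import requests
from bs4 import BeautifulSoup


def fetch_headlines(url: str) -> list[str]:
    """Fetch a page and return the text of every <h2> element."""
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")
    return [h2.get_text(strip=True) for h2 in soup.find_all("h2")]


if __name__ == "__main__":
    # example.com is a neutral placeholder target, not a real data source for this role.
    for headline in fetch_headlines("https://example.com"):
        print(headline)
```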
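The cleaning, standardization, and normalization work might look something like the pandas sketch below: header standardization, type coercion, de-duplication, and a simple min-max scale. The column names and sample values are invented for illustration only.

```python
# Cleaning/normalization sketch with pandas; columns and values are illustrative assumptions.
import pandas as pd

raw = pd.DataFrame({
    "Ticker ": ["AAPL", "msft", "AAPL", None],
    "Close Price": ["189.5", "402.1", "189.5", "77.3"],
})

df = raw.copy()
df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]  # standardize headers
df["ticker"] = df["ticker"].str.upper()                                 # consistent casing
df["close_price"] = pd.to_numeric(df["close_price"], errors="coerce")   # coerce to numeric
df = df.dropna().drop_duplicates()                                      # basic quality checks
# Min-max normalization so downstream models see values in [0, 1]
df["close_norm"] = (df["close_price"] - df["close_price"].min()) / (
    df["close_price"].max() - df["close_price"].min()
)
print(df)
```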
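Finally, a minimal end-to-end ETL sketch: extract from a CSV, transform with pandas, load into a relational table. SQLite stands in for the PostgreSQL/MySQL targets named above so the example runs without credentials; the file, table, and database names are assumptions.

```python
# End-to-end ETL sketch: extract -> transform -> load into a staging table.
import sqlite3

import pandas as pd


def run_pipeline(csv_path: str, db_path: str = "warehouse.db") -> int:
    # Extract: read the raw file
    df = pd.read_csv(csv_path)
    # Transform: standardize headers and drop incomplete rows
    df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
    df = df.dropna()
    # Load: append into a staging table (SQLite as a stand-in for PostgreSQL/MySQL)
    with sqlite3.connect(db_path) as conn:
        df.to_sql("daily_prices_staging", conn, if_exists="append", index=False)
    return len(df)


if __name__ == "__main__":
    # "prices.csv" is a hypothetical input file for illustration only.
    print(f"Loaded {run_pipeline('prices.csv')} rows")
```

In a production pipeline the load step would more likely target PostgreSQL or MySQL via SQLAlchemy or a native driver and run under a scheduler, but the extract-transform-load shape is the same.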