

Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer on a 6-month contract in Redhill, Surrey, offering up to £500 per day (inside IR35). Key skills include Big Data stack proficiency, Azure platforms, SQL, Python, and RESTful API design.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
💰 - Day rate
500
🗓️ - Date discovered
June 10, 2025
🕒 - Project duration
More than 6 months
🏝️ - Location type
Hybrid
📄 - Contract type
Inside IR35
🔒 - Security clearance
Unknown
📍 - Location detailed
Redhill, England, United Kingdom
🧠 - Skills detailed
#Monitoring #ADF (Azure Data Factory) #SQL (Structured Query Language) #Azure DevOps #Data Warehouse #Spark (Apache Spark) #Data Lake #Business Objects #Compliance #Schema Design #Python #Databricks #Data Ingestion #Data Mart #Cloud #Automation #ADLS (Azure Data Lake Storage) #DevOps #Azure #BI (Business Intelligence) #Data Quality #GIT #Big Data #Data Processing #ETL (Extract, Transform, Load) #Deployment #Microsoft Power BI #API (Application Programming Interface) #Data Engineering #R #BO (Business Objects)
Role description
Data Engineer
6 months
Redhill, Surrey - hybrid working (2-3 days per week in office)
Up to £500 per day (inside IR35)
Reporting to the Head of Data & Analytics, this role sits in the Data & Analytics team alongside Data Developers and Data Engineers.
Key Responsibilities:
• Development and automation of ingestion flows, data curation and access layers (a sketch of this kind of flow follows this list):
- Define technical solutions and take part in the development and deployment of applications in accordance with central standards and guidelines
- Contribute to the improvement of standards, compliance and processes to guide Data Lake evolution
- Design, develop, test and deploy data ingestion flows, data marts and core target data components
- Implement tools and end-to-end monitoring to ensure high availability of production data processing, data quality and reliability
• Maintenance of data transformation routes in the Data Lake:
- Maintain BAU operations, including regular reporting on performance, risks and issues
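For illustration, here is a minimal sketch of the kind of ingestion flow described above, assuming a Databricks workspace with PySpark and an ADLS Gen2 account the cluster can already read. Every path, table and column name below (RAW_PATH, curated.sales_daily, order_id, amount) is hypothetical, not taken from the role.

from pyspark.sql import SparkSession, functions as F

# On Databricks the session already exists; getOrCreate() simply reuses it.
spark = SparkSession.builder.getOrCreate()

RAW_PATH = "abfss://raw@examplelake.dfs.core.windows.net/sales/"  # hypothetical landing path
CURATED_TABLE = "curated.sales_daily"  # hypothetical data mart table

# Ingest: read raw CSV files landed by, for example, an ADF copy activity.
raw_df = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv(RAW_PATH)
)

# Curate: deduplicate, enforce types, stamp the load time.
curated_df = (
    raw_df.dropDuplicates(["order_id"])
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
    .withColumn("ingested_at", F.current_timestamp())
)

# Simple data quality gate before publishing to the access layer.
null_keys = curated_df.filter(F.col("order_id").isNull()).count()
if null_keys > 0:
    raise ValueError(f"Data quality check failed: {null_keys} rows with null order_id")

# Publish as a Delta table for downstream marts and Power BI reporting.
curated_df.write.format("delta").mode("overwrite").saveAsTable(CURATED_TABLE)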
Key Skills:
• Experience working within multi-disciplinary data teams
• Fluent with the Big Data stack in a cloud environment (ADLS, Databricks, ADF, Azure DevOps, etc.)
• Knowledge and experience of reporting solution design, including Azure platforms, Pegasus, Attunity, Wax, R, SQL, Python, Spark, Power BI and Business Objects
• Proficient in RESTful API design, implementation and consumption (see the sketch after this list)
• You have a deep understanding of data lake technologies, interrogation of structured and unstructured data, and data warehouse and data schema design for optimised reporting performance
• You have a very good understanding of Spark processes
• You are fluent in multiple development languages
• You have a good understanding of the software development process (Git, CI/CD, etc.)
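On the RESTful API consumption point, a minimal sketch of paging through a cursor-based JSON API follows. The endpoint, auth scheme and field names (BASE_URL, items, next_cursor) are hypothetical, chosen only to illustrate the pattern.

import requests

BASE_URL = "https://api.example.com/v1/orders"  # hypothetical endpoint

def fetch_all_orders(token: str) -> list[dict]:
    """Collect every record from a cursor-paginated REST endpoint."""
    headers = {"Authorization": f"Bearer {token}"}
    records, cursor = [], None
    while True:
        params = {"limit": 100}
        if cursor:
            params["cursor"] = cursor
        resp = requests.get(BASE_URL, headers=headers, params=params, timeout=30)
        resp.raise_for_status()  # fail loudly on HTTP errors rather than ingesting bad data
        payload = resp.json()
        records.extend(payload["items"])
        cursor = payload.get("next_cursor")
        if not cursor:  # no further pages
            return records

Records fetched this way would typically be landed in the raw layer (e.g. written to ADLS as JSON) before Spark curation takes over.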
Please note that we are unable to offer visa sponsorship.
We are committed to creating an inclusive recruitment experience. If you have a disability or long-term health condition and require adjustments to the recruitment process, our Adjustment Concierge Service is here to support you. Please reach out to us at adjustments@robertwalters.com to discuss further.