

Trust In SODA
Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer on an initial 6-month contract, offering £500 - £600 per day outside IR35, fully remote. Key skills include Python, SQL, data architecture, cloud platforms (AWS), and data pipelines. Strong analytical and communication skills are required.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
600
-
🗓️ - Date
February 27, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
Remote
-
📄 - Contract
Outside IR35
-
🔒 - Security
Unknown
-
📍 - Location detailed
Reading, England, United Kingdom
-
🧠 - Skills detailed
#Databases #S3 (Amazon Simple Storage Service) #Data Security #GitHub #Data Accuracy #Python #AWS (Amazon Web Services) #ETL (Extract, Transform, Load) #SQL Server #Data Encryption #Security #Data Architecture #AI (Artificial Intelligence) #Compliance #DynamoDB #Data Lake #Data Quality #Databricks #Unit Testing #Scripting #Data Ingestion #GitLab #NoSQL #Big Data #Deployment #Version Control #Scala #PostgreSQL #Data Governance #Data Engineering #Data Processing #Programming #PySpark #SQL (Structured Query Language) #Git #Bash #Data Privacy #Java #Spark (Apache Spark) #Redshift #Cloud #Data Integration #Jenkins #Data Warehouse #Automation #Kafka (Apache Kafka) #Data Manipulation #Data Pipeline #MS SQL (Microsoft SQL Server)
Role description
Data Engineer
• Duration: Initial 6 months
• Location: Remote
• Rate: £500 - £600 Outside IR35
This role focuses on accelerating the development of the Data Platform by contributing to the cross-functional engineering teams within the Wealth business unit. The primary responsibilities involve technical development, ensuring the quality of data solutions, and collaborating with team members.
As part of the engineering team, you will work with engineering team leads, solution architects, software engineers, quality engineers, and product members to contribute to the design, development, delivery, and operation of UK Wealth data products.
Strong knowledge and recent experience in many of the following data technologies and methodologies:
• Data Architecture Principles: Data lakes, data warehouses, ETL/ELT processes (knowledge of Debezium advantageous), data modelling, and data governance.
• Programming Languages: Python (for data manipulation, analysis, and scripting), SQL (for database interaction and querying), PySpark, Kafka, and potentially Java or Scala for big data processing.
• Database Technologies: Experience with relational databases like MS SQL Server and PostgreSQL, as well as NoSQL databases (e.g. DynamoDB).
• Data Intelligence Platforms: Working knowledge of enterprise DI platforms, specifically Databricks, including data ingestion, streaming, transformation, and data integration patterns and tools.
• Data Integration and Pipelines: Building, maintaining, and optimising data pipelines using tools and frameworks designed for efficient data movement and transformation.
• Cloud Data Platforms: Familiarity with cloud services for data engineering, such as AWS (e.g., S3, Redshift, Glue).
• Data Testing and Quality: Implementing data quality checks, data validation, and unit testing for data pipelines to ensure data accuracy and reliability.
• Scripting and Automation: Using Bash, PowerShell, or Python scripts to automate data-related tasks and workflows.
• Version Control: Proficient with Git and GitHub for managing code and data-related configurations.
• Build and CI/CD for Data: Understanding and applying CI/CD principles for data pipelines and deployments, including familiarity with tools like Jenkins, GitLab CI, or similar platforms.
• Data Security: Awareness of data security practices, including access control, data encryption, and compliance with data privacy regulations.
• Data for AI products/solutions: Knowledge of the use of AI in the context of data. Practical experience of AI and AI tooling is beneficial but not required.
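To illustrate the data testing and quality responsibilities listed above, here is a minimal sketch of the kind of record-level validation a pipeline stage might run before loading data. The schema, field names, and rules are hypothetical examples for illustration only, not specifics of this role:

```python
# Hypothetical data-quality check for a pipeline stage.
# REQUIRED_FIELDS and the validation rules are illustrative, not project specifics.

REQUIRED_FIELDS = {"account_id", "balance", "currency"}

def validate_record(record: dict) -> list:
    """Return a list of data-quality violations for a single record."""
    errors = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        errors.append("missing fields: %s" % sorted(missing))
    if "balance" in record and not isinstance(record["balance"], (int, float)):
        errors.append("balance must be numeric")
    if record.get("currency") not in (None, "GBP", "USD", "EUR"):
        errors.append("unsupported currency: %s" % record["currency"])
    return errors

def validate_batch(records: list) -> tuple:
    """Split a batch into valid records and (record, errors) rejects."""
    valid, rejected = [], []
    for rec in records:
        errs = validate_record(rec)
        if errs:
            rejected.append((rec, errs))
        else:
            valid.append(rec)
    return valid, rejected
```

Checks like these are typically wrapped in unit tests so that pipeline changes cannot silently degrade data accuracy; the same pattern scales up to frameworks such as PySpark by applying equivalent rules per partition.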
Key Competencies
• Strong interpersonal skills with the ability to communicate effectively at all levels
• Analytical thinker with a logical approach to problem-solving and solutions
• Flexible in approach and mindset to adapt to changing priorities and requirements
• Ability to thrive under pressure in a fast-paced environment, able to prioritise tasks and manage your own time appropriately
• Able to work both independently and collaboratively within a team environment to achieve team goals and objectives