Searches @ Wenham Carter

Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer, initially for 6 months, paying £400-500 per day, fully remote. Requires 3+ years' experience, strong Databricks and Apache Spark skills, proficiency in Python and SQL, and hands-on experience with AWS or Azure services.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
💰 - Day rate
520
🗓️ - Date
January 17, 2026
🕒 - Duration
More than 6 months
🏝️ - Location
Remote
📄 - Contract
Outside IR35
🔒 - Security
Unknown
📍 - Location detailed
United Kingdom
🧠 - Skills detailed
#Python #SQL (Structured Query Language) #Cloud #Version Control #Redshift #ADLS (Azure Data Lake Storage) #AWS (Amazon Web Services) #Lambda (AWS Lambda) #Data Modeling #Databricks #Spark (Apache Spark) #Azure Databricks #Automated Testing #Delta Lake #Monitoring #Data Quality #Scala #Data Engineering #Azure #S3 (Amazon Simple Storage Service) #ETL (Extract, Transform, Load) #ML (Machine Learning) #IAM (Identity and Access Management) #Data Pipeline #Batch #Synapse #Databases #Vault
Role description
We are currently recruiting a Data Engineer for one of our clients. The role is outside IR35, pays £400-500 per day, will initially run for 6 months, and is fully remote.

Key Responsibilities
• Design, develop, and maintain batch and streaming data pipelines using Databricks (Apache Spark)
• Build and optimize ETL/ELT workflows for large-scale structured and unstructured data
• Implement Delta Lake architectures (Bronze/Silver/Gold layers)
• Integrate data from multiple sources (databases, APIs, event streams, files)
• Optimize Spark jobs for performance, scalability, and cost
• Manage data quality, validation, and monitoring
• Collaborate with analytics and ML teams to support reporting and model development
• Implement CI/CD, version control, and automated testing for data pipelines

Required Qualifications
• 3+ years of experience as a Data Engineer
• Strong experience with Databricks and Apache Spark
• Proficiency in Python (required) and advanced SQL
• Hands-on experience with AWS or Azure cloud services:
  ◦ AWS: S3, EMR, Glue, Redshift, Lambda, IAM
  ◦ Azure: ADLS Gen2, Azure Databricks, Synapse, Data Factory, Key Vault
• Experience with Delta Lake, Parquet, and data modeling