

Premier Group Recruitment
Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer on a 6-month contract, fully remote, at £350/day. Requires 3+ years in Data Engineering, strong SQL and Python skills, cloud platform experience (AWS/Azure/GCP), and ETL/ELT pipeline development expertise.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
350
-
🗓️ - Date
May 13, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
Remote
-
📄 - Contract
Outside IR35
-
🔒 - Security
Unknown
-
📍 - Location detailed
England, United Kingdom
-
🧠 - Skills detailed
#Terraform #GIT #Python #ML (Machine Learning) #Data Modeling #Security #Data Science #Data Pipeline #Big Data #Docker #Complex Queries #Batch #Spark (Apache Spark) #Scala #Kafka (Apache Kafka) #ETL (Extract, Transform, Load) #Kubernetes #dbt (data build tool) #BigQuery #Data Processing #GitHub #GCP (Google Cloud Platform) #Data Governance #Databricks #Cloud #Redshift #Automation #Snowflake #AWS (Amazon Web Services) #Synapse #Data Warehouse #Azure #SQL (Structured Query Language) #Data Quality #AI (Artificial Intelligence) #Airflow #Data Engineering
Role description
Data Engineer
6 Month Contract
Outside IR35
Fully Remote
£350/Day
About the Role
We are seeking a Data Engineer to design, build, and maintain scalable data pipelines and cloud-based data platforms that support analytics, reporting, and machine learning initiatives. You will work closely with Analytics, Product, Engineering, and Business teams to ensure reliable and high-quality data is available across the organization.
The ideal candidate has strong SQL and Python skills, experience with modern data stack technologies, and a solid understanding of data modeling, ETL/ELT processes, and cloud infrastructure.
Key Responsibilities
• Design, develop, and maintain scalable ETL/ELT pipelines
• Build and optimize data models and data warehouse architectures
• Integrate data from multiple internal and external sources
• Ensure data quality, integrity, security, and governance standards
• Monitor and troubleshoot data pipeline performance and failures
• Collaborate with analysts, data scientists, and software engineers to support business requirements
• Improve data platform scalability, reliability, and cost efficiency
• Automate manual data processes and workflows
• Document data systems, transformations, and operational procedures
• Support real-time and batch data processing solutions
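Purely as an illustration of the batch ETL/ELT work described above (not part of the brief itself), the shape of such a pipeline can be sketched in plain Python; SQLite stands in for a real warehouse such as Snowflake or BigQuery, and all table and field names here are hypothetical:

```python
import sqlite3

# Hypothetical ETL sketch: extract raw records, apply data-quality
# rules and normalisation, then load into a warehouse table.

def extract():
    # In practice this would read from an API, file drop, or source DB.
    return [
        {"order_id": 1, "amount_pence": 1250, "country": "gb"},
        {"order_id": 2, "amount_pence": None, "country": "GB"},
    ]

def transform(rows):
    # Drop rows failing a simple quality check; normalise fields.
    cleaned = []
    for row in rows:
        if row["amount_pence"] is None:  # data-quality rule: amount required
            continue
        cleaned.append({
            "order_id": row["order_id"],
            "amount_gbp": row["amount_pence"] / 100,
            "country": row["country"].upper(),
        })
    return cleaned

def load(rows, conn):
    conn.execute(
        "CREATE TABLE IF NOT EXISTS orders "
        "(order_id INTEGER PRIMARY KEY, amount_gbp REAL, country TEXT)"
    )
    conn.executemany(
        "INSERT INTO orders VALUES (:order_id, :amount_gbp, :country)", rows
    )
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract()), conn)
print(conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0])  # → 1
```

In a production setting each step would typically be an orchestrated task (e.g. in Airflow or Dagster) with monitoring and retries, rather than a single script.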
Required Skills & Experience
• 3+ years of experience in Data Engineering or related roles
• Strong SQL skills and experience optimizing complex queries
• Proficiency in Python for data processing and automation
• Experience with ETL/ELT pipeline development
• Hands-on experience with cloud platforms such as AWS, Azure, or GCP
• Experience with data warehousing technologies such as Snowflake, Redshift, BigQuery, or Synapse
• Familiarity with orchestration tools such as Airflow or Dagster
• Experience with big data processing frameworks such as Spark or Databricks
• Understanding of data modeling concepts and best practices
• Experience working with Git and CI/CD workflows
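As a small, hedged illustration of the query-optimisation skill listed above (the table and index names are invented for the example), adding an index turns a full table scan into an index search, which SQLite's query planner will report:

```python
import sqlite3

# Hypothetical example: compare the query plan before and after indexing.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id INTEGER, ts TEXT, action TEXT)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?, ?)",
    [(i % 50, f"2026-05-{i % 28 + 1:02d}", "click") for i in range(1000)],
)

def plan(sql):
    # EXPLAIN QUERY PLAN rows carry the plan detail in their last column.
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT COUNT(*) FROM events WHERE user_id = 7"
print(plan(query))  # typically a full table scan at this point

conn.execute("CREATE INDEX idx_events_user ON events (user_id)")
print(plan(query))  # now an index search on idx_events_user
```

The same principle scales up to warehouse engines, where the equivalent levers are clustering keys, partitioning, and sort keys rather than B-tree indexes.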
Preferred Qualifications
• Experience with streaming technologies such as Kafka or Kinesis
• Experience with dbt and modern data stack tools
• Knowledge of infrastructure-as-code tools such as Terraform
• Experience supporting machine learning or AI data workflows
• Familiarity with data governance and security best practices
• Exposure to containerization technologies such as Docker and Kubernetes
Tech Stack
• Python
• SQL
• Airflow
• Spark / Databricks
• Snowflake / BigQuery / Redshift
• AWS / Azure / GCP
• dbt
• Kafka
• GitHub Actions / CI-CD
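A CI setup of the kind implied by the GitHub Actions / CI-CD line might look like the following sketch; the workflow name, Python version, and test command are assumptions for illustration, not specifics of this role:

```yaml
# Hypothetical workflow: run pipeline tests on every push and pull request.
name: data-pipeline-ci
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install -r requirements.txt
      - run: pytest tests/
```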
If this sounds like you, apply now or get in touch by email at pgeorge@pg-rec.com