Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer on a remote 6-month contract. Key skills include GCP, BigQuery, advanced SQL, and ETL tools. It requires 3+ years of Data Engineering experience, familiarity with CI/CD workflows, and knowledge of data governance.
🌎 - Country
United Kingdom
πŸ’± - Currency
Β£ GBP
πŸ’° - Day rate
-
πŸ—“οΈ - Date discovered
September 12, 2025
πŸ•’ - Project duration
More than 6 months
🏝️ - Location type
Remote
πŸ“„ - Contract type
Unknown
πŸ”’ - Security clearance
Unknown
πŸ“ - Location detailed
United Kingdom
🧠 - Skills detailed
#"ETL (Extract #Transform #Load)" #Data Pipeline #dbt (data build tool) #DevOps #Data Warehouse #Schema Design #Dataflow #Logging #Looker #Monitoring #Storage #SQL (Structured Query Language) #Automation #Airflow #Observability #Data Modeling #Scala #Data Storage #Datasets #BigQuery #GIT #Terraform #Data Governance #PostgreSQL #GCP (Google Cloud Platform) #Apache Beam #Cloud #Data Quality #Data Engineering #Kafka (Apache Kafka) #BI (Business Intelligence) #ML (Machine Learning)
Role description
We are seeking, on behalf of our client, a highly skilled Data Engineer to support a fast-paced, data-driven project. You will work closely with Cloud/DevOps Engineers to ensure data pipelines are robust, scalable, and seamlessly integrated into a modern GitOps-driven infrastructure.

Responsibilities:
• Design, build, and maintain scalable data pipelines and workflows on GCP (an illustrative pipeline sketch follows this description).
• Integrate pipelines with BigQuery, PostgreSQL, and other cloud-native services.
• Implement data quality checks, validation frameworks, and monitoring to ensure reliability.
• Collaborate with the DevOps team to deploy pipelines using Terraform, ArgoCD, and CI/CD automation.
• Optimize data storage, partitioning, and processing for performance and cost efficiency.
• Support analytics and BI teams by providing clean datasets for tools such as Superset and Looker Studio.
• Document workflows, schemas, and processes for long-term maintainability.

Requirements:
• 3+ years of experience in Data Engineering or similar roles.
• Advanced SQL skills and hands-on experience with BigQuery or other MPP data warehouses.
• Proficiency with ETL/ELT tools (dbt, Dataflow, Apache Beam, Airflow, etc.).
• Familiarity with CI/CD workflows, Git, and Infrastructure-as-Code.
• Knowledge of data modeling, schema design, and best practices for cloud data pipelines.
• Understanding of observability (logging, metrics, tracing) and data governance.
• Bonus: experience with streaming data (Pub/Sub, Kafka) or machine learning pipelines.

Contract Details:
• Duration: 6 months (extendable based on project needs)
• Location: Remote
• Engagement: Contract

Why Join Us:
• You will be part of a dynamic project where DevOps and Data Engineering intersect, enabling you to work on cutting-edge cloud infrastructure, influence architectural decisions, and collaborate with a highly skilled technical team.
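For candidates gauging the day-to-day work, the sketch below is a minimal, hypothetical example of the kind of GCP pipeline the responsibilities describe: an Airflow DAG that loads a daily CSV export from Cloud Storage into a partitioned BigQuery table and applies a basic data quality gate. The bucket, dataset, and table names are illustrative placeholders, not the client's actual infrastructure.

```python
# Minimal sketch of a daily GCS -> BigQuery load with a row-count quality check.
# All resource names (bucket, dataset, table, project) are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import BigQueryCheckOperator
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import GCSToBigQueryOperator

with DAG(
    dag_id="daily_orders_load",          # hypothetical pipeline name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Idempotent load: overwrite only the execution date's partition
    # of an ingestion-time partitioned table.
    load_orders = GCSToBigQueryOperator(
        task_id="load_orders_csv",
        bucket="example-landing-bucket",
        source_objects=["orders/{{ ds }}/*.csv"],
        destination_project_dataset_table="example-project.analytics.orders${{ ds_nodash }}",
        source_format="CSV",
        skip_leading_rows=1,
        write_disposition="WRITE_TRUNCATE",
        time_partitioning={"type": "DAY"},
    )

    # Simple data quality gate: fail the run if the partition loaded no rows.
    check_not_empty = BigQueryCheckOperator(
        task_id="check_orders_not_empty",
        sql=(
            "SELECT COUNT(*) > 0 "
            "FROM `example-project.analytics.orders` "
            "WHERE DATE(_PARTITIONTIME) = '{{ ds }}'"
        ),
        use_legacy_sql=False,
    )

    load_orders >> check_not_empty
```

In practice the client's stack may favour Dataflow/Apache Beam jobs or dbt models over a plain load task; the pattern shown here, idempotent partition loads followed by an automated quality check, reflects the emphasis on reliability and cost-aware partitioning in the role description.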