Data Engineer - Senior

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer with 7+ years of experience, focused on building ETL pipelines with Python or Java and SQL. It is a fully remote contract with ongoing recruitment and requires proficiency with cloud-based storage.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
-
🗓️ - Date discovered
August 11, 2025
🕒 - Project duration
Unknown
-
🏝️ - Location type
Remote
-
📄 - Contract type
Unknown
-
🔒 - Security clearance
Unknown
-
📍 - Location detailed
United Kingdom
-
🧠 - Skills detailed
#GCP (Google Cloud Platform) #SQL (Structured Query Language) #Batch #Data Processing #ETL (Extract, Transform, Load) #Data Modeling #Datasets #EC2 #Scala #Unit Testing #AWS (Amazon Web Services) #AWS S3 (Amazon Simple Storage Service) #Data Quality #Storage #Python #Lambda (AWS Lambda) #Airflow #Agile #Data Analysis #Data Governance #Data Engineering #Kafka (Apache Kafka) #Java #Normalization #Cloud
Role description
What We're Working On
We help global enterprises launch digital products that reach millions of users. Our projects involve massive datasets, complex pipelines, and real-world impact across industries.
What You'll Do
• Join the team as a Senior-Level Data Engineer
• Design, build, and maintain reliable ETL pipelines from the ground up
• Work with large, complex datasets using Python or Java and raw SQL (a minimal sketch of this kind of pipeline follows the description)
• Build scalable, efficient data flows and transformations
• Collaborate with data analysts, product managers, and developers to deliver actionable data to stakeholders
• Ensure data quality, consistency, and performance across systems
What We're Looking For
• 7+ years of experience as a Data Engineer
• Strong skills in Python or Java for data processing
• Proficiency in SQL, especially for querying large datasets
• Experience with batch and/or stream data processing pipelines
• Familiarity with cloud-based storage and compute (e.g., AWS S3, EC2, Lambda, GCP Cloud Storage)
• Knowledge of data modeling, normalization, and performance optimization
• Comfortable working in agile, collaborative, fully remote environments
• Fluent in English (spoken and written)
Nice to Have (Not Required)
• Experience with Airflow, Kafka, or similar orchestration/messaging tools
• Exposure to basic data governance or privacy standards
• Unit testing and CI/CD pipelines for data workflows (see the test sketch after the pipeline example below)
This job is 100% Remote – please ensure you have a comfortable home office setup in your preferred work location. Ongoing recruitment – no set deadline.
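To give a concrete picture of the day-to-day work, here is a minimal, illustrative sketch of a batch ETL pipeline in Python with raw SQL. Everything specific in it is a hypothetical stand-in: the file and table names, the schema, and the data-quality rule are not from this listing, and a real pipeline would typically extract from cloud storage such as AWS S3 rather than a local CSV.

```python
import csv
import sqlite3

SOURCE_CSV = "orders.csv"    # hypothetical extract; in practice this might land from S3
TARGET_DB = "warehouse.db"   # hypothetical target; stands in for a real warehouse

def extract(path):
    """Extract: read raw rows from the source file."""
    with open(path, newline="") as f:
        yield from csv.DictReader(f)

def transform(rows):
    """Transform: cast types and apply a simple data-quality rule."""
    for row in rows:
        amount = float(row["amount"])
        if amount < 0:  # hypothetical quality rule: discard negative amounts
            continue
        yield (row["order_id"], row["customer_id"], amount)

def load(records, db_path):
    """Load: write transformed records idempotently using raw SQL."""
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS orders ("
        "order_id TEXT PRIMARY KEY, customer_id TEXT, amount REAL)"
    )
    conn.executemany(
        "INSERT OR REPLACE INTO orders (order_id, customer_id, amount) "
        "VALUES (?, ?, ?)",
        records,
    )
    conn.commit()
    conn.close()

if __name__ == "__main__":
    load(transform(extract(SOURCE_CSV)), TARGET_DB)
```

Keeping extract, transform, and load as separate functions makes each step independently testable, which is what the unit-testing nice-to-have points at.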
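On that note, here is a sketch of how the transform step above might be unit tested with the standard library's unittest. The module name pipeline is an assumption (the sketch above saved as pipeline.py), and the expectations simply mirror the hypothetical quality rule.

```python
import unittest

from pipeline import transform  # assumes the sketch above is saved as pipeline.py

class TransformTests(unittest.TestCase):
    def test_casts_amount_to_float(self):
        # A well-formed row should come through with amount cast to float.
        rows = [{"order_id": "1", "customer_id": "a", "amount": "9.50"}]
        self.assertEqual(list(transform(rows)), [("1", "a", 9.5)])

    def test_drops_negative_amounts(self):
        # The hypothetical quality rule discards negative amounts entirely.
        rows = [{"order_id": "2", "customer_id": "b", "amount": "-1.00"}]
        self.assertEqual(list(transform(rows)), [])

if __name__ == "__main__":
    unittest.main()
```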