Data Engineer - Senior

⭐ - Featured Role | Apply directly with Data Freelance Hub
This is a Senior Data Engineer role on a 100% remote contract, requiring 7+ years of experience, strong Python or Java skills, and proficiency in SQL. Familiarity with cloud storage and data processing pipelines is essential.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
-
πŸ—“οΈ - Date discovered
June 7, 2025
🕒 - Project duration
Unknown
-
🏝️ - Location type
Remote
-
📄 - Contract type
Unknown
-
🔒 - Security clearance
Unknown
-
πŸ“ - Location detailed
United Kingdom
-
🧠 - Skills detailed
#Data Analysis #EC2 #Storage #Agile #S3 (Amazon Simple Storage Service) #AWS (Amazon Web Services) #Airflow #Datasets #Data Quality #SQL (Structured Query Language) #Unit Testing #Data Governance #Data Processing #Lambda (AWS Lambda) #Data Modeling #GCP (Google Cloud Platform) #Scala #Data Engineering #Normalization #AWS S3 (Amazon Simple Storage Service) #Python #Kafka (Apache Kafka) #Batch #Java #ETL (Extract, Transform, Load) #Cloud
Role description
What We're Working On
We help global enterprises launch digital products that reach millions of users. Our projects involve massive datasets, complex pipelines, and real-world impact across industries.

What You'll Do
• Join the team as a Senior-Level Data Engineer
• Design, build, and maintain reliable ETL pipelines from the ground up
• Work with large, complex datasets using Python or Java and raw SQL
• Build scalable, efficient data flows and transformations
• Collaborate with data analysts, product managers, and developers to deliver actionable data to stakeholders
• Ensure data quality, consistency, and performance across systems

What We're Looking For
• 7+ years of experience as a Data Engineer
• Strong skills in Python or Java for data processing
• Proficient in SQL, especially for querying large datasets
• Experience with batch and/or stream data processing pipelines
• Familiarity with cloud-based storage and compute (e.g., AWS S3, EC2, Lambda, GCP Cloud Storage)
• Knowledge of data modeling, normalization, and performance optimization
• Comfortable working in agile, collaborative, and fully remote environments
• Fluent in English (spoken and written)

Nice to Have (Not Required)
• Experience with Airflow, Kafka, or similar orchestration/messaging tools
• Exposure to basic data governance or privacy standards
• Unit testing and CI/CD pipelines for data workflows

This job is 100% remote – please ensure you have a comfortable home office setup in your preferred work location. Ongoing recruitment – no set deadline.