Lumenalta (formerly Clevertech)

Data Engineer - Senior

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer with 7+ years of experience, proficient in Python or Java and SQL. It offers a fully remote contract with a competitive pay rate. Key skills include ETL, cloud-based storage, and data processing.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
Unknown
-
πŸ—“οΈ - Date
October 28, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Remote
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
πŸ“ - Location detailed
United Kingdom
-
🧠 - Skills detailed
#ETL (Extract, Transform, Load) #Data Analysis #Cloud #Java #Lambda (AWS Lambda) #SQL (Structured Query Language) #EC2 #Data Governance #Scala #GCP (Google Cloud Platform) #Unit Testing #S3 (Amazon Simple Storage Service) #Storage #Data Processing #Agile #Datasets #AWS S3 (Amazon Simple Storage Service) #Batch #Normalization #Data Engineering #Data Quality #Python #Data Modeling #Airflow #AWS (Amazon Web Services) #Kafka (Apache Kafka)
Role description
What We're Working On
We help global enterprises launch digital products that reach millions of users. Our projects involve massive datasets, complex pipelines, and real-world impact across industries.

What You'll Do
• Join the team as a Senior-Level Data Engineer
• Design, build, and maintain reliable ETL pipelines from the ground up
• Work with large, complex datasets using Python or Java and raw SQL
• Build scalable, efficient data flows and transformations
• Collaborate with data analysts, product managers, and developers to deliver actionable data to stakeholders
• Ensure data quality, consistency, and performance across systems

What We're Looking For
• 7+ years of experience as a Data Engineer
• Strong skills in Python or Java for data processing
• Proficiency in SQL, especially for querying large datasets
• Experience with batch and/or stream data processing pipelines
• Familiarity with cloud-based storage and compute (e.g., AWS S3, EC2, Lambda, GCP Cloud Storage)
• Knowledge of data modeling, normalization, and performance optimization
• Comfort working in agile, collaborative, and fully remote environments
• Fluency in English (spoken and written)

Nice to Have (Not Required)
• Experience with Airflow, Kafka, or similar orchestration/messaging tools
• Exposure to basic data governance or privacy standards
• Unit testing and CI/CD pipelines for data workflows

This job is 100% remote, so please ensure you have a comfortable home office setup in your preferred work location. The position is open to candidates based in Europe or regions with compatible time zones; to ensure effective collaboration with our client and team, candidates must maintain a 6-hour overlap with Eastern U.S. business hours. This is an evergreen opening with no set deadline; we're always excited to connect with professionals who want to help us build the future.