

Lumenalta (formerly Clevertech)
Senior Data Engineer (Remote)
Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer (Remote) with 7+ years of experience, strong skills in Python or Java, and proficiency in SQL. Contract length and pay rate are unspecified. Candidates must be based in Europe or compatible time zones.
Country: United Kingdom
Currency: £ GBP
Day rate: Unknown
Date: December 30, 2025
Duration: Unknown
Location: Remote
Contract: Unknown
Security: Unknown
Location detailed: United Kingdom
Skills detailed
#GCP (Google Cloud Platform) #Normalization #Data Quality #EC2 #Data Analysis #AWS (Amazon Web Services) #SQL (Structured Query Language) #Airflow #Data Pipeline #Scala #Agile #Data Governance #S3 (Amazon Simple Storage Service) #Batch #Datasets #Lambda (AWS Lambda) #ETL (Extract, Transform, Load) #Data Engineering #Kafka (Apache Kafka) #Java #Python #Storage #Data Modeling #AWS S3 (Amazon Simple Storage Service) #Cloud #Unit Testing #Data Processing
Role description
What We're Working On
We help global enterprises launch digital products that reach millions of users. Our projects involve massive datasets, complex pipelines, and real-world impact across industries.
What You'll Do
β’ Join the team as a senior-level Data Engineer with strong ownership of data pipelines and architecture
β’ Design, build, and maintain reliable ETL pipelines from the ground up
β’ Work with large, complex datasets using Python or Java and advanced SQL
β’ Build scalable and efficient data flows and transformations across multiple systems
• Partner with data analysts, product managers, and engineers to deliver high-quality, actionable data
β’ Ensure data quality, consistency, performance, and reliability across the data platform
What We're Looking For
β’ 7+ years of experience as a Data Engineer
β’ Strong skills in Python or Java for data processing
β’ Proficient in SQL, especially for querying large datasets
β’ Experience with batch and/or stream data processing pipelines
β’ Familiarity with cloud-based storage and compute (e.g., AWS S3, EC2, Lambda, GCP Cloud Storage, etc.)
β’ Knowledge of data modeling, normalization, and performance optimization
β’ Comfortable working in agile, collaborative, and fully remote environments
β’ Fluent in English (spoken and written)
Nice to Have (Not Required)
β’ Experience with Airflow, Kafka, or similar orchestration/message tools
β’ Exposure to basic data governance or privacy standards
β’ Unit testing and CI/CD pipelines for data workflows
This job is 100% Remote; please ensure you have a comfortable home office setup in your preferred work location.
This is a fully remote position open to candidates based in Europe or regions with compatible time zones. To ensure effective collaboration with our client and team, candidates must maintain a 6-hour overlap with Eastern U.S. business hours.
This is an evergreen opening with no set deadline; we're always excited to connect with professionals who want to help us build the future.






