

Lumenalta (formerly Clevertech)
Senior Data Engineer (Remote)
Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer (Remote) with a contract length of "evergreen" and a pay rate of "competitive." Key skills include 7+ years of experience, proficiency in Python or Java, advanced SQL, and familiarity with cloud services.
Country
United Kingdom
Currency
£ GBP
-
Day rate
Unknown
-
Date
March 3, 2026
Duration
Unknown
-
Location
Remote
-
Contract
Unknown
-
Security
Unknown
-
Location detailed
United Kingdom
-
Skills detailed
#"ETL (Extract, Transform, Load)" #Agile #Lambda (AWS Lambda) #Data Processing #AWS (Amazon Web Services) #Normalization #AWS S3 (Amazon Simple Storage Service) #Data Modeling #Storage #Python #Data Analysis #EC2 #Datasets #Data Governance #Unit Testing #Data Engineering #Scala #Airflow #Data Pipeline #Java #Batch #S3 (Amazon Simple Storage Service) #Data Quality #Cloud #GCP (Google Cloud Platform) #Kafka (Apache Kafka) #SQL (Structured Query Language)
Role description
What We're Working On
We help global enterprises launch digital products that reach millions of users. Our projects involve massive datasets, complex pipelines, and real-world impact across industries.
What You'll Do
• Join the team as a senior-level Data Engineer with strong ownership of data pipelines and architecture
• Design, build, and maintain reliable ETL pipelines from the ground up
• Work with large, complex datasets using Python or Java and advanced SQL
• Build scalable and efficient data flows and transformations across multiple systems
• Partner with data analysts, product managers, and engineers to deliver high-quality, actionable data
• Ensure data quality, consistency, performance, and reliability across the data platform
What We're Looking For
• 7+ years of experience as a Data Engineer
• Strong skills in Python or Java for data processing
• Proficient in SQL, especially for querying large datasets
• Experience with batch and/or stream data processing pipelines
• Familiarity with cloud-based storage and compute (e.g., AWS S3, EC2, Lambda, GCP Cloud Storage)
• Knowledge of data modeling, normalization, and performance optimization
• Comfortable working in agile, collaborative, and fully remote environments
• Fluent in English (spoken and written)
Nice to Have (Not Required)
• Experience with Airflow, Kafka, or similar orchestration/messaging tools
• Exposure to basic data governance or privacy standards
• Unit testing and CI/CD pipelines for data workflows
This job is 100% Remote; please ensure you have a comfortable home office setup in your preferred work location.
This is a fully remote position open to candidates based in Europe or regions with compatible time zones. To ensure effective collaboration with our client and team, candidates must maintain a 6-hour overlap with Eastern U.S. business hours.
Application Deadline
This role is an evergreen position with no predetermined start date. Applications will be accepted until March 29, 2026. As we continue to build our talent pipeline, the position may be reposted to allow us to connect with additional qualified professionals.






