Crossing Hurdles

Data Engineer | $60/hr Remote

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer – AI Trainer, offering up to $60/hr on a remote, hourly contract for 10 to 40 hours per week. Key skills include Hadoop, Spark, Kafka, and cloud platforms (AWS, GCP, or Azure). A BSc in a related field is required.
🌎 - Country
United Kingdom
💱 - Currency
$ USD
💰 - Day rate
$480 (equivalent to 8 hours at the $60/hr rate)
🗓️ - Date
February 1, 2026
🕒 - Duration
Unknown
🏝️ - Location
Remote
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
United Kingdom
🧠 - Skills detailed
#"ETL (Extract #Transform #Load)" #Python #GCP (Google Cloud Platform) #Cloud #Spark (Apache Spark) #Data Pipeline #Programming #Hadoop #Scala #Azure #Data Architecture #ML (Machine Learning) #AWS (Amazon Web Services) #Kafka (Apache Kafka) #Computer Science #Big Data #AI (Artificial Intelligence) #Scripting #Data Science #Data Engineering #Java #Security #Data Processing
Role description
Position: Data Engineer – AI Trainer
Type: Hourly contract
Compensation: $30 – $60/hr
Location: Remote
Commitment: 10 to 40 hours per week

Role Responsibilities
• Design, develop, and optimize large-scale data pipelines using Hadoop, Spark, and related big data technologies (see the sketch below this listing).
• Build and maintain scalable data architectures that support AI model training and analytics workloads.
• Integrate and manage real-time data streams using Kafka, ensuring data reliability and quality.
• Deploy, orchestrate, and monitor distributed data processing systems on cloud platforms.
• Collaborate closely with data scientists and machine learning engineers to enable AI and LLM initiatives.
• Document complex data workflows and create clear training materials for technical teams.
• Enforce best practices across data engineering, including performance optimization, security, and scalability.
• Support AI and generative AI use cases through high-quality data curation and pipeline design.

Requirements
• BSc in Computer Science, Data Engineering, or a closely related field.
• Strong hands-on experience with big data technologies, including Hadoop and Spark.
• Proven expertise using Kafka for real-time data streaming and integration.
• Solid background in data engineering, with experience building and scaling ETL pipelines.
• Practical experience working with major cloud platforms such as AWS, GCP, or Azure.
• Proficiency in programming or scripting languages such as Python, Scala, or Java.
• Excellent written and verbal communication skills, with the ability to explain complex technical concepts.
• Strong problem-solving and troubleshooting skills in distributed systems.
• Ability to work independently in a fully remote, collaborative environment.

Application Process (Takes 20 Min)
• Upload resume
• Interview (15 min)
• Submit form
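For context, here is a minimal sketch of the kind of Spark Structured Streaming pipeline the responsibilities describe: consuming events from Kafka, applying a light transformation, and landing the result as Parquet. The broker address, topic name (events), event schema, and output paths are illustrative assumptions, and running it also assumes the spark-sql-kafka connector package is on the Spark classpath.

```python
# Illustrative sketch only: a minimal Kafka -> Spark -> Parquet streaming ETL job.
# Topic name, schema, broker, and paths are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = (
    SparkSession.builder
    .appName("kafka-events-etl")  # hypothetical application name
    .getOrCreate()
)

# Assumed event schema; a production pipeline would typically pull this
# from a schema registry rather than hard-coding it.
event_schema = StructType([
    StructField("user_id", StringType()),
    StructField("event_type", StringType()),
    StructField("event_time", TimestampType()),
])

# Read the raw stream from Kafka (hypothetical local broker and topic).
raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "events")
    .load()
)

# Kafka delivers the payload as bytes: decode, parse the JSON body,
# and drop records with no event type as a basic quality filter.
events = (
    raw.select(from_json(col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
    .filter(col("event_type").isNotNull())
)

# Write the cleaned stream to Parquet; the checkpoint location is required
# for fault-tolerant, exactly-once file sinks.
query = (
    events.writeStream
    .format("parquet")
    .option("path", "/data/events")
    .option("checkpointLocation", "/data/events_checkpoint")
    .start()
)
query.awaitTermination()
```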