micro1

Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is a Data Engineer - AI Trainer contract (full-time or part-time), UK remote, paying competitive rates. It requires a BSc in Computer Science, expertise in Hadoop, Spark, Kafka, and cloud platforms, plus strong programming skills. Preferred: AI training experience and industry certifications.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
Unknown
-
πŸ—“οΈ - Date
December 20, 2025
🕒 - Duration
More than 6 months
-
🏝️ - Location
Remote
-
📄 - Contract
Fixed Term
-
🔒 - Security
Unknown
-
πŸ“ - Location detailed
United Kingdom
-
🧠 - Skills detailed
#Big Data #Spark (Apache Spark) #Data Architecture #"ETL (Extract, Transform, Load)" #Hadoop #Data Engineering #Programming #Data Pipeline #Data Quality #ML (Machine Learning) #Security #AI (Artificial Intelligence) #AWS (Amazon Web Services) #Scala #Data Science #Data Processing #Azure #Kafka (Apache Kafka) #Cloud #GCP (Google Cloud Platform) #Python #Java #Scripting #Computer Science
Role description
Job Title: Data Engineer - AI Trainer
Job Type: Contract (full-time or part-time)
Location: UK, Remote

Job Summary: Join our customer's team as a Data Engineer - AI Trainer, where you will play a pivotal role in shaping the next generation of AI systems through expert data engineering. As part of an innovative, remote-first environment, you will collaborate with cross-functional teams to build robust, scalable big data solutions and help train AI models for real-world impact.

Key Responsibilities:
• Design, develop, and optimize large-scale data pipelines using Hadoop, Spark, and related technologies.
• Build and maintain efficient data architectures to support AI model training and analytics.
• Integrate real-time data streams via Kafka and ensure high data quality and reliability.
• Leverage cloud platforms to deploy, orchestrate, and monitor distributed data processing workloads.
• Collaborate closely with data scientists and machine learning engineers to deliver seamless data solutions for AI initiatives.
• Document complex data workflows and provide clear training resources to empower team members.
• Champion best practices in data engineering, ensuring security, scalability, and performance.

Required Skills and Qualifications:
• BSc in Computer Science, Data Engineering, or a related field.
• Proven expertise in big data technologies, including Hadoop and Spark.
• Strong experience with Kafka for stream processing and integration.
• Solid background in data engineering, with proficiency in building and scaling ETL pipelines.
• Hands-on experience with leading cloud platforms (AWS, GCP, Azure, etc.).
• Exceptional written and verbal communication skills to articulate technical concepts with clarity.
• Proficiency in scripting/programming languages (e.g., Python, Scala, or Java).
• Demonstrated ability to work independently in a remote collaboration environment.
Preferred Qualifications:
• Prior experience as an AI trainer or in AI/ML prompt engineering projects.
• Advanced degree in Computer Science, Data Engineering, or a related field.
• Industry certifications in cloud or big data technologies.