Crossing Hurdles

Data Engineer | $60/hr Remote

⭐ - Featured Role | Apply directly with Data Freelance Hub
This is a remote Data Engineer contract paying $30–$60/hr with flexible hours (10–40 hrs/week). Key skills include data engineering, Hadoop, Spark, Kafka, and cloud platforms; AI training experience and relevant certifications are preferred.
🌎 - Country
United Kingdom
💱 - Currency
$ USD
💰 - Day rate
480
🗓️ - Date
December 20, 2025
🕒 - Duration
Unknown
🏝️ - Location
Remote
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
United Kingdom
🧠 - Skills detailed
#Big Data #Spark (Apache Spark) #Data Architecture #ETL (Extract, Transform, Load) #Hadoop #Data Engineering #Datasets #Data Pipeline #Data Quality #ML (Machine Learning) #Security #AI (Artificial Intelligence) #AWS (Amazon Web Services) #Scala #Data Science #Azure #Kafka (Apache Kafka) #Cloud #GCP (Google Cloud Platform) #Data Security #Computer Science
Role description
At Crossing Hurdles, we work as a referral partner: we refer candidates to a client that collaborates with the world’s leading AI research labs to build and train cutting-edge AI models.

Position: Data Engineer – AI Trainer
Type: Contract
Compensation: $30–$60/hr
Location: Remote
Duration: 10–40 hrs/week, flexible and asynchronous

Requirements (training support will be provided):
• Strong experience in data engineering and large-scale data systems
• Hands-on expertise with big data technologies such as Hadoop and Spark
• Experience building and maintaining scalable data pipelines (ETL/ELT; see the batch sketch after this listing)
• Proficiency with real-time data streaming tools like Kafka (see the streaming sketch after this listing)
• Experience with cloud platforms (AWS, GCP, Azure, or similar)
• Familiarity with AI/LLM applications, data curation, and prompt engineering
• Strong problem-solving and troubleshooting skills in distributed systems
• Excellent written and verbal communication skills
• Comfortable collaborating in fully remote, cross-functional teams
• Ability to work independently and manage tasks asynchronously

Preferred:
• Prior experience as an AI Trainer or on AI/ML-focused projects
• Exposure to generative AI systems and LLM-driven data workflows
• Advanced degree in Computer Science, Data Engineering, or a related field
• Cloud or big data certifications (AWS, GCP, Azure, Hadoop, Spark, etc.)
• Experience documenting technical workflows for training or onboarding

Role responsibilities:
• Design, develop, and optimize large-scale data pipelines using Hadoop and Spark
• Build and maintain robust data architectures to support AI model training
• Integrate and manage real-time data streams using Kafka
• Deploy, orchestrate, and monitor distributed data workloads on cloud platforms
• Collaborate with data scientists and ML engineers to support AI initiatives
• Curate and prepare high-quality datasets for AI and LLM training use cases
• Document data workflows, pipelines, and best practices clearly
• Ensure data security, scalability, performance, and reliability across systems
• Support AI training efforts by validating data quality and pipeline outputs

Application process (takes 7–30 minutes):
• Upload resume
• AI interview (camera on, skill-based questions; coding for engineers)
• Submit form
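For orientation only, here is a minimal sketch of the kind of batch ETL pipeline the requirements describe. It assumes PySpark; the paths, column names, and app name are hypothetical and not taken from the listing.

```python
# Minimal PySpark batch ETL sketch (illustrative only).
# All paths and column names below are hypothetical assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Extract: read raw JSON events from a (hypothetical) HDFS location.
raw = spark.read.json("hdfs:///data/raw/events/")

# Transform: drop malformed rows and derive a date partition column.
clean = (
    raw.dropna(subset=["event_id", "timestamp"])
       .withColumn("event_date", F.to_date(F.col("timestamp")))
)

# Load: write partitioned Parquet for downstream training jobs.
(clean.write
      .mode("overwrite")
      .partitionBy("event_date")
      .parquet("hdfs:///data/curated/events/"))

spark.stop()
```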
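A companion sketch shows real-time ingestion with Spark Structured Streaming reading from Kafka. Again, this is illustrative only: the broker address, topic name, schema, and paths are assumptions, and the spark-sql-kafka connector package must be on the classpath.

```python
# Minimal Spark Structured Streaming sketch over Kafka (illustrative only).
# Broker, topic, schema, and paths are hypothetical assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, LongType

spark = SparkSession.builder.appName("stream-sketch").getOrCreate()

schema = StructType([
    StructField("event_id", StringType()),
    StructField("payload", StringType()),
    StructField("ts", LongType()),
])

# Subscribe to a Kafka topic; the value column arrives as raw bytes.
stream = (
    spark.readStream
         .format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")
         .option("subscribe", "events")
         .load()
)

# Parse the JSON payload and project the typed fields.
parsed = (
    stream.select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
          .select("e.*")
)

# Sink: append parsed records to Parquet with checkpointing for recovery.
query = (
    parsed.writeStream
          .format("parquet")
          .option("path", "hdfs:///data/stream/events/")
          .option("checkpointLocation", "hdfs:///checkpoints/events/")
          .outputMode("append")
          .start()
)

query.awaitTermination()
```

The checkpoint location is what lets a restarted job resume from the last committed Kafka offsets instead of reprocessing the whole topic.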