Barclay Simpson

Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer with 7+ years in software engineering, including 4+ years in data engineering. The contract length is unspecified, the day rate is £800, and the work location is remote within the United Kingdom. Key skills include SQL, NoSQL, and cloud services.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
💰 - Day rate
800
🗓️ - Date
November 4, 2025
🕒 - Duration
Unknown
🏝️ - Location
Unknown
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
United Kingdom
🧠 - Skills detailed
#Spark (Apache Spark) #Redshift #SQL (Structured Query Language) #Data Pipeline #ETL (Extract, Transform, Load) #AWS (Amazon Web Services) #Data Science #Kafka (Apache Kafka) #Storage #NoSQL #Security #Cloud #Data Lifecycle #Data Quality #Cybersecurity #AI (Artificial Intelligence) #Java #S3 (Amazon Simple Storage Service) #Programming #Python #Visualization #Airflow #Databases #Scala #Data Engineering #Data Framework
Role description
We're seeking an experienced Data Engineer to design, build, and scale robust data systems and pipelines for an innovative AI-based start-up. You'll shape the data infrastructure from the ground up, driving innovation in how data is collected, processed, and utilized for cybersecurity solutions.

Responsibilities:
• Design and maintain scalable, secure data pipelines and architectures.
• Own the full data lifecycle, from ingestion and storage to processing and visualization.
• Collaborate with engineers, data scientists, and product teams to support product and analytical needs.
• Monitor and optimize data performance, scalability, and reliability.
• Define and enforce data quality standards and best practices.
• Rapidly prototype and iterate on new data solutions.
• Mentor junior engineers and contribute to technical reviews.

Requirements:
• 7+ years in software engineering, including 4+ years in data engineering.
• Strong experience with data frameworks (e.g., Spark, Kafka, Airflow) and ETL workflows (see the pipeline sketch at the end of this description).
• Proficiency with SQL and NoSQL databases, including query optimization.
• Experience with cloud data services (e.g., AWS Redshift, S3, Glue, EMR) and CI/CD for data pipelines.
• Strong programming skills in Python, Java, or Scala.
• Excellent problem-solving and collaboration skills.
• Ability to thrive in a fast-paced, dynamic environment.

Why You'll Love This Role:
• Tackle complex, large-scale data challenges in cybersecurity.
• Work with a team of experienced engineers and technical leaders.
• Make a real impact by enabling proactive threat detection and risk mitigation.
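For a flavour of the day-to-day work, here is a minimal sketch of the kind of ETL pipeline the requirements describe, written as an Apache Airflow DAG in Python. The DAG id, task names, and the extract/transform/load bodies are hypothetical placeholders, not details of this role.

# Minimal ETL sketch with Apache Airflow; all ids, task names, and
# step bodies below are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # Hypothetical source pull, e.g. from an API or an S3 landing bucket.
    return [{"id": 1, "value": 10}, {"id": 2, "value": None}]

def transform(ti):
    # Hypothetical data-quality rule: drop rows with missing values.
    rows = ti.xcom_pull(task_ids="extract")
    return [r for r in rows if r["value"] is not None]

def load(ti):
    # Hypothetical load step; a real pipeline might COPY into Redshift.
    rows = ti.xcom_pull(task_ids="transform")
    print(f"Loaded {len(rows)} rows")

with DAG(
    dag_id="example_etl",  # hypothetical name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    (
        PythonOperator(task_id="extract", python_callable=extract)
        >> PythonOperator(task_id="transform", python_callable=transform)
        >> PythonOperator(task_id="load", python_callable=load)
    )

The same extract-transform-load shape applies whether the orchestration layer is Airflow, a Spark job, or a managed AWS service such as Glue.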