

Natobotics
Confluent/Kafka Consulting Engineer (Remote, UK)
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a "Confluent/Kafka Consulting Engineer" on a 6-month contract, paying competitive rates. It requires 5+ years of experience with Apache Kafka, strong Java/Python/Scala skills, and expertise in cloud deployments (AWS, GCP, Azure). The role is remote and UK-based.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
Unknown
-
🗓️ - Date
October 16, 2025
🕒 - Duration
More than 6 months
-
🏝️ - Location
Remote
-
📄 - Contract
Fixed Term
-
🔒 - Security
Unknown
-
📍 - Location detailed
London, England, United Kingdom
-
🧠 - Skills detailed
#GCP (Google Cloud Platform) #Data Lake #Monitoring #Migration #Data Warehouse #Grafana #Scala #Cloudera #Observability #Deployment #Splunk #Cloud #Data Pipeline #Data Quality #Python #REST (Representational State Transfer) #Automation #Kafka (Apache Kafka) #Java #Apache Kafka #DevOps #Docker #Data Engineering #Data Lineage #Azure #Big Data #Microservices #AWS (Amazon Web Services) #Kubernetes #Consulting #Prometheus
Role description
Confluent Consulting Engineer
Work Mode: Remote (UK-based)
Contract Duration: 6 months
Role Overview
We are looking for experienced Confluent Consulting Engineers to design, develop, and maintain scalable real-time data pipelines and integrations using Kafka and Confluent components. You’ll work closely with data engineers, solution architects, and DevOps teams to deliver high-performance streaming solutions across cloud environments.
Key Responsibilities
Design and implement real-time data streaming solutions using Apache Kafka and Confluent Platform (a minimal producer sketch follows this list).
Develop, optimize, and maintain event-driven architectures and microservices integrations.
Collaborate with cross-functional teams to ensure scalability, resilience, and data quality.
Deploy and manage Kafka clusters on AWS, Azure, or GCP.
Contribute to CI/CD pipelines, observability, and infrastructure automation.
Troubleshoot performance bottlenecks and support production streaming environments.
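To make the streaming work above concrete, here is a minimal sketch of a Kafka producer in Java. The broker address (localhost:9092) and the "orders" topic are illustrative assumptions, not details from this role; a production setup would add security, retries, and schema handling on top of this.

```java
// Minimal Kafka producer sketch (illustrative only).
// Assumes a local dev broker and a hypothetical "orders" topic.
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class OrderProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumption: local dev broker
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        props.put("acks", "all"); // wait for all in-sync replicas, favouring durability

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Keying by order ID routes all events for one order to the same
            // partition, so per-order ordering is preserved.
            producer.send(new ProducerRecord<>("orders", "order-42", "{\"status\":\"CREATED\"}"));
            producer.flush(); // block until the record is acknowledged
        }
    }
}
```

The choice of record key matters in designs like this: Kafka only guarantees ordering within a partition, so keying by a business identifier is the usual way to keep related events in sequence.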
Must-Have Skills
5+ years of hands-on experience with Apache Kafka (open-source, Confluent, Cloudera, or AWS MSK).
Strong proficiency in Java, Python, or Scala.
Solid understanding of event-driven architecture and data streaming patterns (see the consumer sketch after this list).
Experience deploying Kafka on cloud platforms (AWS, GCP, Azure).
Familiarity with Docker, Kubernetes, and CI/CD pipelines.
Excellent problem-solving, analytical, and communication skills.
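As a counterpart to the producer sketch above, here is a minimal Kafka consumer in Java. The group ID "order-readers", the broker address, and the "orders" topic are assumptions for illustration; the sketch shows the consumer-group pattern that underpins most event-driven Kafka designs.

```java
// Minimal Kafka consumer sketch (illustrative only).
// Assumes the same local broker and hypothetical "orders" topic as above.
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class OrderConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumption: local dev broker
        props.put("group.id", "order-readers");           // consumers in one group share partitions
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());
        props.put("auto.offset.reset", "earliest");       // start from the beginning on first run

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("orders"));
            while (true) {
                // Poll in a loop; each call returns whatever records arrived since the last poll.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("key=%s value=%s partition=%d%n",
                            record.key(), record.value(), record.partition());
                }
            }
        }
    }
}
```

Running several instances with the same group ID spreads the topic's partitions across them, which is how Kafka consumers scale horizontally.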
Desired Skills (Preferred)
Hands-on experience with the Confluent Kafka ecosystem, including:
Kafka Connect, Kafka Streams (a minimal topology sketch follows this list), KSQL, Schema Registry, REST Proxy, Confluent Control Center
Confluent Cloud services: ksqlDB Cloud, Apache Flink
Stream Governance, Data Lineage, Stream Catalog, Audit Logs, RBAC
Confluent certifications (Developer, Administrator, or Flink Developer).
Experience with Confluent Platform, Confluent Cloud managed services, and multi-cloud deployments.
Knowledge of data mesh architectures, KRaft migration, and modern event streaming patterns.
Exposure to Prometheus, Grafana, or Splunk for monitoring.
Experience working with data lakes, data warehouses, or big data ecosystems.
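For a sense of what Kafka Streams work looks like, here is a minimal topology sketch in Java that maintains a running count of events per key. The "orders" and "order-counts" topic names and the application ID are assumptions for illustration; a real engagement would define these against the client's environment.

```java
// Minimal Kafka Streams sketch (illustrative only): counts events per key
// from a hypothetical "orders" topic into an "order-counts" topic.
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Produced;

public class OrderCountApp {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "order-count-app");   // also the consumer group ID
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumption: local dev broker
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> orders = builder.stream("orders"); // assumes String keys and values

        orders.groupByKey()                        // group events by key (e.g. order ID)
              .count()                             // running count per key, backed by a state store
              .toStream()
              .mapValues(count -> count.toString()) // Long -> String for the sink serde
              .to("order-counts", Produced.with(Serdes.String(), Serdes.String()));

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close)); // clean shutdown
    }
}
```

The same aggregation could be expressed declaratively in KSQL/ksqlDB; the Streams API version shown here is the embedded-library form a consulting engineer would typically deploy alongside microservices.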
Personal Attributes
Strong analytical and problem-solving abilities.
High initiative, adaptability, and flexibility.
Excellent customer orientation and quality focus.
Strong verbal and written communication skills.
Experience Required
Minimum: 5 years of relevant experience (10 years of total professional experience preferred).