Confluent Kafka/Connector

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role seeks a Confluent Kafka/Connector specialist for an "X months" contract at a pay rate of "$Y/hour". Key skills include hands-on experience with the Confluent Kafka ecosystem, HTTP connectors, and proficiency in Java/Python.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
πŸ—“οΈ - Date discovered
September 20, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
Unknown
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
πŸ“ - Location detailed
United States
🧠 - Skills detailed
#Kafka (Apache Kafka) #AWS (Amazon Web Services) #Debugging #Java #Microservices #S3 (Amazon Simple Storage Service) #Scala #Cloud #Kubernetes #REST (Representational State Transfer) #Monitoring #JDBC (Java Database Connectivity) #Python #Azure #Consulting #GCP (Google Cloud Platform) #REST API #Docker
Role description
Responsibilities
• Design, configure, and maintain Confluent Kafka clusters.
• Implement and manage HTTP source/sink connectors for real-time data flow.
• Troubleshoot and optimize Kafka connectors and integrations with REST APIs.
• Collaborate with teams to build scalable, event-driven architectures.
• Provide best practices and consulting support on the Confluent platform.

Required Skills
• Hands-on experience with the Confluent Kafka ecosystem (Kafka Connect, Schema Registry, KSQL).
• Strong expertise in HTTP connectors (source/sink).
• Solid knowledge of distributed systems and event streaming.
• Experience in Java/Python for debugging and customization.
• Familiarity with cloud platforms (AWS/Azure/GCP) and Docker/Kubernetes.

Nice to Have
• Experience with other connectors (JDBC, S3, Debezium).
• Knowledge of Confluent Control Center and monitoring tools.
• Exposure to microservices architecture.
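For a sense of the day-to-day work described above, registering an HTTP sink connector with Kafka Connect typically looks like the sketch below. This is a minimal illustration, not a production config: the connector name, topic, target URL, and bootstrap servers are placeholders, and exact property names should be verified against the Confluent HTTP Sink connector documentation for the version in use.

```json
{
  "name": "http-sink-demo",
  "config": {
    "connector.class": "io.confluent.connect.http.HttpSinkConnector",
    "tasks.max": "1",
    "topics": "orders",
    "http.api.url": "https://example.com/api/events",
    "request.method": "POST",
    "key.converter": "org.apache.kafka.connect.storage.StringConverter",
    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "value.converter.schemas.enable": "false",
    "confluent.topic.bootstrap.servers": "localhost:9092"
  }
}
```

Assuming a Connect worker on its default port, a config like this would be submitted via the Kafka Connect REST API, e.g. `curl -X POST -H "Content-Type: application/json" --data @http-sink.json http://localhost:8083/connectors`.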