

Kafka Developer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Kafka Developer in Los Angeles, CA, on a long-term contract. Key skills include Kafka, data engineering, and real-time data processing (Kafka Streams/Spark/Flink). Experience in building scalable data pipelines and system integration is required.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: September 10, 2025
Project duration: Unknown
Location type: On-site
Contract type: Unknown
Security clearance: Unknown
Location detailed: Los Angeles, CA
Skills detailed: #Security #Data Processing #Kafka (Apache Kafka) #Spark (Apache Spark) #Scala #DevOps #Data Engineering #Data Pipeline
Role description
Job Title: Kafka Developer
Location: Los Angeles, CA (Onsite)
Duration: Long Term Contract
Job Description:
• We are seeking a skilled Kafka Developer to design, develop, and maintain real-time data streaming solutions.
• The ideal candidate should have strong experience in building scalable data pipelines, integrating Kafka with multiple systems, and ensuring high availability and performance of messaging systems.
Responsibilities:
• Design, develop, and deploy Kafka-based data streaming solutions.
• Build and manage Kafka topics, partitions, producers, and consumers; a minimal producer/consumer sketch follows this list.
• Integrate Kafka with external systems using Kafka Connect and custom connectors; see the example connector config below.
• Implement real-time data processing using Kafka Streams/Spark/Flink (as applicable); see the Kafka Streams sketch below.
• Ensure high availability, reliability, and scalability of the Kafka ecosystem.
• Monitor, troubleshoot, and optimize Kafka clusters for performance and security.
• Collaborate with cross-functional teams (data engineering, DevOps, and application developers).
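
To make the producer and consumer responsibilities concrete, here is a minimal sketch using the standard Apache Kafka Java client. It is an illustration under assumptions, not part of the posting: the broker address, topic name, consumer group id, and payload are all placeholders.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

public class EventsPipelineSketch {
    // Placeholder broker and topic; not details from the posting.
    private static final String BOOTSTRAP = "localhost:9092";
    private static final String TOPIC = "events";

    public static void main(String[] args) {
        // Producer: publish a keyed record; the key determines the target partition.
        Properties p = new Properties();
        p.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, BOOTSTRAP);
        p.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        p.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        p.put(ProducerConfig.ACKS_CONFIG, "all"); // wait for all in-sync replicas

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(p)) {
            producer.send(new ProducerRecord<>(TOPIC, "user-42", "{\"action\":\"login\"}"));
        }

        // Consumer: poll the same topic as part of a consumer group.
        Properties c = new Properties();
        c.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, BOOTSTRAP);
        c.put(ConsumerConfig.GROUP_ID_CONFIG, "events-readers");
        c.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        c.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        c.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(c)) {
            consumer.subscribe(List.of(TOPIC));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            for (ConsumerRecord<String, String> r : records) {
                System.out.printf("partition=%d offset=%d key=%s value=%s%n",
                        r.partition(), r.offset(), r.key(), r.value());
            }
        }
    }
}
```

Keying records by user id, as above, keeps all events for a given user on one partition, preserving per-user ordering while still spreading load across the topic's partitions.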
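For the Kafka Connect item, connectors are defined by configuration rather than code. The sketch below is a standalone-mode config for the FileStreamSource connector that ships with Kafka; the connector name, file path, and topic are assumptions chosen for illustration.

```properties
# connect-file-source.properties -- run with:
#   bin/connect-standalone.sh config/connect-standalone.properties connect-file-source.properties
# File path and topic are placeholders, not details from the posting.
name=file-source-sketch
connector.class=org.apache.kafka.connect.file.FileStreamSourceConnector
tasks.max=1
file=/var/log/app/events.log
topic=events
```

A custom connector follows the same pattern: implement Connect's SourceConnector or SinkConnector interface, then reference your class in `connector.class`.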
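And for the stream-processing item, here is a minimal Kafka Streams topology, one of the three engines the posting names. The application id, topic names, and the per-user count logic are illustrative assumptions, not requirements from the role.

```java
import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Produced;

public class ClickCountSketch {
    public static void main(String[] args) {
        // Application id doubles as the consumer group and state-store prefix.
        // Broker address and topic names are placeholders, not from the posting.
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "click-count-sketch");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        // Read raw click events, count them per user key, write counts downstream.
        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> clicks = builder.stream("clicks");
        KTable<String, Long> perUser = clicks.groupByKey().count();
        perUser.toStream().to("clicks-per-user", Produced.with(Serdes.String(), Serdes.Long()));

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```

Because `count()` is backed by a fault-tolerant state store with a changelog topic, the aggregation survives instance restarts, which speaks directly to the high-availability responsibility above.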