

Kafka Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is for a Kafka Engineer on a long-term contract, fully remote, offering competitive pay. Candidates must have 10+ years of IT experience, 7+ years in data engineering, and expertise in Kafka, Java, Spring, AWS services, and RabbitMQ.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: June 11, 2025
Project duration: Unknown
Location type: Remote
Contract type: Unknown
Security clearance: Unknown
Location detailed: United States
Skills detailed: #Oracle #ELB (Elastic Load Balancing) #Kafka (Apache Kafka) #JDBC (Java Database Connectivity) #Documentation #AWS (Amazon Web Services) #Python #Monitoring #VPN (Virtual Private Network) #REST API #Data Processing #API (Application Programming Interface) #Agile #Terraform #VPC (Virtual Private Cloud) #Distributed Computing #Scrum #Computer Science #SQL (Structured Query Language) #Data Engineering #REST (Representational State Transfer) #RDS (Amazon Relational Database Service) #Apache Kafka #SQL Server #EC2 #RDBMS (Relational Database Management System) #Data Modeling #Database Design #S3 (Amazon Simple Storage Service) #Databases #Java
Role description
Job Title: Kafka Engineer
Location: Fully Remote
Duration: Long-term contract
Candidates Must Have:
1. Kafka
2. Java
3. Spring
4. AWS services (EC2, ELB, RDS, Route53, S3, VPC, VPN, TGW)
5. RabbitMQ
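To give a concrete flavor of how the Kafka-on-AWS portion of this stack is often provisioned, here is a minimal, hypothetical Terraform sketch of an Amazon MSK cluster. All names and the subnet/security-group variables are illustrative assumptions, not details from this posting.

```hcl
# Hypothetical sketch: a small Amazon MSK cluster defined with the
# AWS provider's aws_msk_cluster resource. Assumes a VPC with private
# subnets and a security group already exist and are passed in as vars.
resource "aws_msk_cluster" "example" {
  cluster_name           = "example-msk"   # hypothetical name
  kafka_version          = "3.6.0"
  number_of_broker_nodes = 3               # one broker per AZ

  broker_node_group_info {
    instance_type   = "kafka.m5.large"
    client_subnets  = var.private_subnet_ids   # assumed to exist
    security_groups = [var.msk_security_group] # assumed to exist
  }
}
```

In practice a role like this would also involve wiring the cluster into the VPC networking (Route53, VPN, TGW) listed above, but that is beyond a short sketch.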
Job Description:
• Minimum of 10 years of IT experience;
• Minimum of 7 years of hands-on experience within a data engineering organization;
• Experience designing and operating solutions with relational databases (Oracle, Postgres, SQL Server);
• Self-motivated and able to work with minimal supervision;
• Familiar with creating different types of documentation (functional specs, process docs, test plans, etc.);
• Experience with AWS MSK (Amazon Managed Streaming for Apache Kafka);
• Well versed in MSK Connect and Kafka Connect for integration with RDBMSs using JDBC connectors;
• Hands-on experience with Terraform;
• Experience with the primary AWS services (EC2, ELB, RDS, Route53, S3, VPC, VPN, TGW);
• Strong AWS skills, with a track record of implementing AWS services across a variety of distributed computing environments;
• Ability to install, maintain, and troubleshoot Kafka;
• Deep experience with Apache Kafka: designing, developing, and managing data streams, troubleshooting, internals, cluster design and optimization, monitoring, schema registry and schema evolution, the REST API, and related components;
• Extensive experience with messaging and stream processing on Kafka;
• In-depth knowledge of the functionality surrounding Kafka;
• Excellent understanding of SQL and Python for data processing;
• Understanding of data modeling concepts and RDBMS database design;
• Ability to work with a variety of platforms and application stacks;
• Experience working in an Agile/Scrum development process;
• Bachelor's Degree in Computer Science or Software Engineering.
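The MSK Connect / Kafka Connect requirement above typically means configuring JDBC connectors that stream tables from a relational database into Kafka topics. As a hedged illustration only, a Kafka Connect JDBC source connector config might look like the following; the connector class is from Confluent's JDBC connector, and the connection details, table, and column names are hypothetical.

```json
{
  "name": "orders-jdbc-source",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "connection.url": "jdbc:postgresql://db.example.internal:5432/orders",
    "connection.user": "connect_user",
    "table.whitelist": "orders",
    "mode": "incrementing",
    "incrementing.column.name": "order_id",
    "topic.prefix": "pg-"
  }
}
```

With `mode` set to `incrementing`, the connector polls for rows with an `order_id` greater than the last one it saw and publishes them to the `pg-orders` topic; schema registry and schema evolution, also mentioned above, come into play when the table's columns change over time.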