Confluent Kafka Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Confluent Kafka Engineer on a contract for over 6 months, paying up to $58.00 per hour. Requires strong expertise in Confluent Kafka, AWS services, IaC with Terraform, and relevant certifications. On-site in Owings Mills, MD.
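The listed day rate below ($464) is consistent with the $58.00 hourly cap assuming a standard 8-hour day; the 8-hour day is an assumption, not something the posting states. A quick sanity check:

```python
# Hypothetical check: the posting's $464 day rate vs. its $58.00/hour cap.
# The 8-hour working day is an assumption, not stated in the listing.
HOURLY_RATE = 58.00
HOURS_PER_DAY = 8

day_rate = HOURLY_RATE * HOURS_PER_DAY
print(day_rate)  # 464.0
```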
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
464
-
🗓️ - Date discovered
September 18, 2025
🕒 - Project duration
More than 6 months
-
🏝️ - Location type
On-site
-
📄 - Contract type
Unknown
-
🔒 - Security clearance
Unknown
-
📍 - Location detailed
Owings Mills, MD 21117
-
🧠 - Skills detailed
#Documentation #AWS DevOps #Kubernetes #Monitoring #Lambda (AWS Lambda) #Data Engineering #Firewalls #Scripting #Containers #Visualization #Scala #ETL (Extract, Transform, Load) #DevOps #SaaS (Software as a Service) #Deployment #Requirements Gathering #RDS (Amazon Relational Database Service) #Java #AWS (Amazon Web Services) #Terraform #IAM (Identity and Access Management) #VPC (Virtual Private Cloud) #EC2 #Disaster Recovery #Ansible #Python #Security #Agile #Infrastructure as Code (IaC) #Public Cloud #S3 (Amazon Simple Storage Service) #Cloud #Programming #Shell Scripting #Prometheus #Grafana #Kafka (Apache Kafka) #Linux
Role description
The successful candidate will be responsible for developing and managing Infrastructure as Code (IaC), along with software development, continuous integration, system administration, and Linux operations. This role involves working with Confluent Kafka, Confluent Cloud, Schema Registry, KStreams, and technologies like Terraform and Kubernetes to develop and manage infrastructure-related code on the AWS platform.

Responsibilities
- Support systems engineering lifecycle activities for the Kafka platform, including requirements gathering, design, testing, implementation, operations, and documentation.
- Automate platform management processes using tools like Ansible, Python, or other scripting languages.
- Troubleshoot incidents affecting the Kafka platform.
- Collaborate with cross-functional teams to understand data requirements and design scalable solutions that meet business needs.
- Develop documentation materials.
- Participate in on-call rotations to address critical issues and ensure the reliability of data engineering systems.
- Monitor, troubleshoot, and optimize the performance and reliability of Kafka in AWS environments.

Experience
- Ability to troubleshoot and diagnose complex issues, including internal and external SaaS/PaaS and network flow troubleshooting.
- Experience supporting technical users and conducting requirements analysis.
- Capability to work independently with minimal guidance and oversight.
- Familiarity with IT Service Management, including Incident & Problem Management.
- Highly skilled in identifying performance bottlenecks and anomalous system behavior, and in resolving the root causes of service issues.
- Demonstrated ability to collaborate effectively across teams to influence the design, operations, and deployment of highly available software.
- Knowledge of best practices related to security, performance, and disaster recovery.
- Advanced understanding of agile practices such as CI/CD, application resiliency, and security.
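As one illustration of the platform-automation work the responsibilities describe (scripting with Python against Kafka topic settings), here is a minimal sketch that diffs a desired topic configuration against an observed one. The topic settings shown are hypothetical examples, not taken from the posting:

```python
# Minimal sketch of config-drift detection for platform automation.
# The setting names/values below are hypothetical examples.

def config_drift(desired: dict, observed: dict) -> dict:
    """Return settings whose observed value differs from the desired one."""
    return {
        key: {"desired": value, "observed": observed.get(key)}
        for key, value in desired.items()
        if observed.get(key) != value
    }

desired = {"retention.ms": "604800000", "min.insync.replicas": "2"}
observed = {"retention.ms": "604800000", "min.insync.replicas": "1"}

print(config_drift(desired, observed))
# {'min.insync.replicas': {'desired': '2', 'observed': '1'}}
```

In practice a check like this would feed an alerting or remediation step rather than a `print`, but the diff logic is the same.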
Required Technical Expertise
- Deep understanding of Kafka and its various components.
- Strong knowledge of Kafka Connect, KSQL, and KStreams.
- Experience designing and building secure Kafka, streaming, and messaging platforms at enterprise scale, including integration with other data systems in hybrid multi-cloud environments.
- Hands-on experience with Confluent Kafka, Confluent Cloud, Schema Registry, and KStreams.
- Proficiency in Infrastructure as Code (IaC) using tools like Terraform.
- Strong operational background in running Kafka clusters at scale.
- Familiarity with physical/on-premises systems and public cloud infrastructure.
- Comprehensive understanding of Kafka broker, connection, and topic tuning and architectures.
- Strong knowledge of Linux fundamentals as they relate to Kafka performance.
- Background in both systems and software engineering.
- Strong understanding of containers and Kubernetes clusters.
- Proven experience as a DevOps Engineer with a focus on AWS.
- Proficient in AWS services such as EC2, IAM, S3, RDS, Lambda, EKS, and VPC.
- Working knowledge of networking concepts, including VPCs, Transit Gateways, firewalls, and load balancers.
- Experience with monitoring and visualization tools such as Prometheus, Grafana, and Kibana.
- Competent in developing new solutions in one or more high-level programming languages, including Java and Python.
- Experience with configuration management in code/IaC, including Ansible and Terraform.
- Hands-on experience delivering complex software in an enterprise environment.
- A minimum of 3 years of experience in Python and shell scripting.
- At least 3 years of AWS DevOps experience.
- Proficient in distributed Linux environments.

Preferred Technical Experience
- Certification in Confluent Kafka and/or Kubernetes is a plus.
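On the topic-tuning knowledge listed above: the property that makes partition count a tuning decision is that records with the same key always map to the same partition, which preserves per-key ordering. A self-contained sketch of that idea (Kafka's default partitioner actually hashes the key bytes with murmur2; `zlib.crc32` stands in here only to keep the example stdlib-only):

```python
import zlib

# Illustrative key -> partition mapping. Kafka's default partitioner uses
# murmur2 on the key bytes; zlib.crc32 is a stand-in for a stdlib-only sketch.

def partition_for(key: bytes, num_partitions: int) -> int:
    return zlib.crc32(key) % num_partitions

# Same key, same partition: this is what preserves per-key ordering,
# and why changing the partition count reshuffles key placement.
p1 = partition_for(b"order-123", 6)
p2 = partition_for(b"order-123", 6)
assert p1 == p2
```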
Job Types: Full-time, Contract
Pay: Up to $58.00 per hour
Education: Bachelor's (Required)
Experience:
- Designing and building secure Kafka, streaming, and messaging platforms at enterprise scale: 6 years (Required)
- Confluent Kafka: 6 years (Required)
- Confluent Cloud: 6 years (Required)
- Schema Registry: 6 years (Required)
- KStreams: 6 years (Required)
- AWS services (EC2, IAM, S3, RDS, Lambda, EKS, and VPC): 6 years (Required)
- Prometheus, Grafana, and Kibana: 6 years (Required)
- Configuration management in code/IaC (Ansible and Terraform): 6 years (Required)
License/Certification: Certification in Confluent Kafka and/or Kubernetes (Required)
Ability to Commute: Owings Mills, MD 21117 (Required)
Work Location: In person