Kafka Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Kafka Engineer with a 6-month hybrid contract in Owings Mills, MD. Key skills include Kafka, AWS, Python, and Ansible. Requires 3+ years of AWS DevOps and Python experience. Certification in Confluent Kafka is a plus.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
-
🗓️ - Date discovered
May 28, 2025
🕒 - Project duration
More than 6 months
-
🏝️ - Location type
Hybrid
-
📄 - Contract type
Unknown
-
🔒 - Security clearance
Unknown
-
📍 - Location detailed
Owings Mills, MD
-
🧠 - Skills detailed
#AWS DevOps #Grafana #Data Engineering #Firewalls #Public Cloud #Documentation #Security #Monitoring #Cloud #Lambda (AWS Lambda) #Java #Scripting #Terraform #SaaS (Software as a Service) #Automation #Infrastructure as Code (IaC) #Deployment #Disaster Recovery #Containers #IAM (Identity and Access Management) #GIT #RDS (Amazon Relational Database Service) #ETL (Extract, Transform, Load) #EC2 #VPC (Virtual Private Cloud) #Prometheus #AWS (Amazon Web Services) #Scala #Agile #Linux #Shell Scripting #Requirements Gathering #Kafka (Apache Kafka) #DevOps #Kubernetes #Ansible #Python #S3 (Amazon Simple Storage Service)
Role description
Initial Assignment Duration: 6 months
Work Location: Hybrid; onsite required Monday/Tuesday in Owings Mills, MD
Interview Process: 30-minute preliminary screening, followed by an in-person second round
Core Skills: Kafka platform and AWS Cloud; Git; Agile project management and process documentation; Python for scripting and Ansible for automation; Kafka data streaming.

Kafka Engineering Job Description

Role Description
• The successful candidate will be responsible for developing and managing infrastructure as code (IaC), software development, continuous integration, system administration, and Linux.
• The candidate will work with Confluent Kafka, Confluent Cloud, Schema Registry, and KStreams, using technologies such as Terraform and Kubernetes to develop and manage infrastructure-related code on the AWS platform.

Responsibilities
• Support systems engineering lifecycle activities for the Kafka platform, including requirements gathering, design, testing, implementation, operations, and documentation.
• Automate platform management processes with Ansible, Python, or other scripting tools/languages.
• Troubleshoot incidents impacting the Kafka platform.
• Collaborate with cross-functional teams to understand data requirements and design scalable solutions that meet business needs.
• Develop documentation materials.
• Participate in on-call rotations to address critical issues and ensure the reliability of data engineering systems.
• Monitor, troubleshoot, and optimize the performance and reliability of Kafka in AWS environments.

Experience
• Ability to troubleshoot and diagnose complex issues (including internal and external SaaS/PaaS, and network flows).
• Demonstrated experience supporting technical users and conducting requirements analysis.
• Able to work independently with minimal guidance and oversight.
• Experience with IT Service Management and familiarity with incident and problem management.
• Highly skilled in identifying performance bottlenecks, spotting anomalous system behavior, and resolving the root cause of service issues.
• Demonstrated ability to work effectively across teams and functions to influence the design, operations, and deployment of highly available software.
• Knowledge of standard methodologies for security, performance, and disaster recovery.
• Advanced understanding of agile practices such as CI/CD, application resiliency, and security.

Required Technical Expertise
• Deep understanding of Kafka and its various components.
• Strong knowledge of Kafka Connect, KSQL, and KStreams.
• Implementation experience designing and building secure Kafka/streaming/messaging platforms at enterprise scale, and integrating them with other data systems in hybrid multi-cloud environments.
• Experience with Confluent Kafka, Confluent Cloud, Schema Registry, and KStreams; infrastructure as code (IaC) using tools like Terraform.
• Strong operational background running Kafka clusters at scale.
• Knowledge of both physical/on-premises systems and public cloud infrastructure.
• Strong understanding of Kafka broker, Connect, and topic tuning and architectures.
• Strong understanding of Linux fundamentals as they relate to Kafka performance.
• Background in both systems and software engineering.
• Working knowledge of and hands-on experience with containers and Kubernetes clusters.
• Proven experience as a DevOps engineer with a focus on AWS.
• Strong proficiency in AWS services such as EC2, IAM, S3, RDS, Lambda, EKS, and VPC; working knowledge of networking (VPCs, Transit Gateways, firewalls, load balancers, etc.).
• Experience with monitoring and visualization tools such as Prometheus, Grafana, and Kibana.
• Competent developing new solutions in one or more high-level languages (Java, Python).
• Competent with configuration management in code/IaC, including Ansible and Terraform.
• Hands-on experience delivering complex software in an enterprise environment.
• 3+ years of Python and shell scripting.
• 3+ years of AWS DevOps experience.
• Proficiency in distributed Linux environments.

Preferred Technical Experience
• Certification in Confluent Kafka and/or Kubernetes is a plus.
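To give a flavor of the Python-based platform automation this role describes, here is a minimal sketch of an under-replicated-partition check, the kind of health probe that might run from an Ansible playbook or a cron job. The dict shape only loosely mirrors Kafka topic metadata, and all field and function names here are illustrative assumptions, not a real client library's API.

```python
# Illustrative sketch: flag under-replicated Kafka partitions from topic
# metadata. In a real deployment the metadata would come from a Kafka
# admin client; here it is a hand-written sample for demonstration.

def under_replicated(partitions):
    """Return (topic, partition) pairs whose in-sync replica set (ISR)
    is smaller than the assigned replica set."""
    return [
        (p["topic"], p["partition"])
        for p in partitions
        if len(p["isr"]) < len(p["replicas"])
    ]

# Sample metadata: partition 1 of 'orders' has lost one in-sync replica.
sample = [
    {"topic": "orders", "partition": 0, "replicas": [1, 2, 3], "isr": [1, 2, 3]},
    {"topic": "orders", "partition": 1, "replicas": [1, 2, 3], "isr": [1, 3]},
]

print(under_replicated(sample))  # → [('orders', 1)]
```

An automation script would typically alert (or open an incident) when this list is non-empty, since under-replicated partitions signal a broker or network problem.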