
Certified Kafka Engineer
Featured Role | Apply direct with Data Freelance Hub
Country: United States
Currency: $ USD
Day rate: $496
Date discovered: September 16, 2025
Project duration: More than 6 months
Location type: Remote
Contract type: Unknown
Security clearance: Unknown
Location detailed: Remote
Skills detailed: #Aurora #Grafana #Replication #AWS Databases #Data Privacy #Deployment #Documentation #Security #Terraform #API (Application Programming Interface) #Docker #Migration #GDPR (General Data Protection Regulation) #Alation #RDS (Amazon Relational Database Service) #Python #Data Pipeline #Prometheus #Kerberos #Cloud #IAM (Identity and Access Management) #Strategy #Datadog #Data Integrity #Jira #Automation #Logging #GCP (Google Cloud Platform) #Infrastructure as Code (IaC) #Databases #Apache Kafka #Java #Programming #Computer Science #PCI (Payment Card Industry) #Kubernetes #Kafka (Apache Kafka) #Cybersecurity #Observability #EC2 #Monitoring #Database Management #DevSecOps #AWS (Amazon Web Services) #Scala #Data Engineering #Compliance #Azure
Role description
ADSJP00001939
Kafka Platform Engineer
LOCATION: Remote, but must be able to work EST hours and be located within 60 miles of one of the following offices:
Draper, UT - 12921 Vista Station Blvd, Suite 100
Columbus, OH - 3095 Loyalty Circle
Plano, TX - 7500 Dallas Parkway Suite 700
Chadds Ford, PA - 5 Hillman Drive, Suite 102
Wilmington, DE - One Righter Pkwy., Suite 100
NYC (Manhattan)- 156 5th Avenue Floor 2
Working hours: 9-5
Temp to Perm
Top 3 Must-Haves (Hard and/or Soft Skills):
1. Kafka & Confluent Cloud Expertise
Deep understanding of Kafka architecture and Confluent Cloud services.
Experience with Kafka Connect, Schema Registry, and stream processing.
2. AWS Infrastructure & Database Management
Hands-on experience with AWS services like RDS, Aurora, EC2, IAM, and networking.
Ability to integrate Kafka with AWS-hosted databases and troubleshoot cloud-native issues.
3. Terraform & Infrastructure Automation
Proficiency in Terraform for provisioning Kafka clusters, AWS resources, and managing infrastructure as code.
Familiarity with GitOps workflows and CI/CD pipelines.
Degree Requirements: experience accepted in lieu of degree.
Top 3 Nice-To-Haves (Hard and/or Soft Skills):
1. Monitoring & Observability
Experience with tools like Prometheus, Grafana, Datadog, or Confluent Metrics API.
Ability to set up alerting and dashboards for Kafka and cloud services.
2. Security & Governance
Knowledge of RBAC, encryption, and audit logging in Confluent Cloud and AWS.
Experience implementing secure data pipelines and compliance controls.
3. Strong Collaboration & Incident Response
Ability to work cross-functionally with data engineers, SREs, and developers.
Skilled in communicating during outages, postmortems, and planning sessions.
Certification Requirements (Any Preferences):
1. Confluent Certified Developer for Apache Kafka
Validates deep understanding of Kafka architecture, APIs, and Confluent tooling.
Ideal for engineers building and managing Kafka-based data pipelines.
2. AWS Certified Solutions Architect – Associate
Demonstrates strong knowledge of AWS services, networking, and architecture best practices.
Especially useful for integrating Kafka with AWS-hosted databases and services.
3. HashiCorp Certified: Terraform Associate
Confirms proficiency in infrastructure as code, Terraform modules, and cloud provisioning.
Valuable for managing Kafka infrastructure and AWS resources declaratively.
The Kafka Platform Engineer designs, implements, and supports scalable, secure Kafka-based messaging pipelines that power real-time communication between critical systems such as credit, loan applications, and fraud services. This role focuses on improving the resiliency, reliability, and operations of our Kafka platform in a highly regulated financial environment. The Kafka Platform Engineer partners closely with engineering and platform teams to support the migration from on-prem to AWS and ensure seamless integration across systems.
Essential Job Functions
Regularly check cloud services for performance issues and out-of-date components, and optimize as needed. Configure and manage user permissions and roles to ensure secure access to cloud resources. Develop and maintain backup strategies to ensure data integrity and availability. Maintain detailed records of system configurations and changes for compliance and troubleshooting. - (25%)
Write and maintain scripts for automated deployment processes. Ensure automated tests are part of the CI/CD pipeline to catch issues early. Track deployment progress and resolve any issues that arise during the process. Work closely with developers to ensure smooth integration of new code into production. Continuously improve deployment processes to reduce downtime and increase efficiency. - (25%)
Set up and configure tools to monitor cloud infrastructure and applications. Develop dashboards for real-time monitoring and set up alerts for critical issues. Regularly review monitoring data to identify trends and potential issues. Provide regular reports on system performance and health to stakeholders. Continuously improve monitoring solutions to cover new services and technologies. - (20%)
Organize meetings to gather requirements from various teams for cloud projects. Ensure alignment between development, network, and security teams on cloud initiatives. Mediate and resolve any conflicts or discrepancies in requirements or priorities. Keep detailed records of discussions and decisions made during meetings. Ensure that all agreed-upon actions are completed in a timely manner. - (15%)
Regularly review resource usage to identify areas for optimization. Predict future resource requirements based on current trends and business growth. Create plans for scaling resources up or down based on demand. Ensure that resources are allocated efficiently to avoid waste and reduce costs. Continuously review and adjust capacity plans to reflect changes in business needs or technology. - (15%)
Minimum Qualifications
Bachelor's Degree in Information Technology, Computer Science, Engineering, or a related field, or equivalent relevant work experience
At least one platform-specific certification (AWS, Azure, GCP, DevSecOps, Apache Kafka).
2+ years of relevant experience across areas of platform engineering.
2+ years of experience with cloud services and an understanding of infrastructure-as-code (IaC) tools such as Terraform or AWS CloudFormation.
Preferred Qualifications
5+ years of cloud engineering experience, particularly in designing and implementing cloud platform solutions.
3+ years of experience with Apache Kafka in highly regulated, mission-critical environments (preferably finance or banking).
Strong understanding of Kafka internals and distributed systems.
Proficiency in Java, Scala, or Python for building Kafka producers, consumers, and stream processors.
Experience with Kafka Connect, Schema Registry (Avro), and Kafka Streams.
Hands-on experience with containerization (Docker, Kubernetes) and CI/CD pipelines.
Familiarity with securing Kafka using Kerberos, SSL, ACLs, and integration with IAM systems.
Solid understanding of financial transaction systems, messaging standards, and data privacy regulations (e.g., SOX, PCI-DSS, GDPR).
Skills
Programming Languages
Cloud Services Management
CI/CD
Configuration Management (CM)
Infrastructure As Code (IaC)
DevSecOps
Monitoring Solutions
IT Capacity Planning
Security Management
Technical Communication
Cloud Deployment
What would "a day in the life" of this role look like?
A typical day might include:
Morning Check-ins:
Reviewing system health dashboards and alerts.
Checking in with direct reports or team leads on ongoing issues or overnight incidents.
Team Collaboration:
Leading or attending stand-ups with infrastructure, network, and operations teams.
Coordinating with cybersecurity, application development, and support teams.
Strategic Planning & Execution:
Reviewing infrastructure roadmaps and project timelines.
Evaluating vendor performance and contract renewals.
Approving changes and reviewing architecture proposals.
Infrastructure Work:
Use Terraform to provision or update Kafka topics, connectors, or AWS resources.
Troubleshoot Kafka Connect integrations with AWS databases (e.g., RDS, Aurora).
Optimize throughput, latency, and schema evolution.
Documentation:
Update Confluence pages with:
Architecture diagrams.
Runbooks for incident response.
Kafka topic naming conventions and retention policies.
Document changes made via Terraform and link them to Jira tickets.
Stakeholder Engagement:
Meeting with business units to understand upcoming needs.
Problem Solving & Escalations:
Handling escalated technical issues or outages.
Making decisions on resource allocation and prioritization.
What level of interaction will this role have with team members and the hiring manager?
High interaction, especially with infrastructure engineers, network admins, project managers, and application owners. Expect daily or near-daily engagement.
What would you say is the top priority for the worker over the first few weeks/months?
Top Priorities: First Few Weeks
1. Understand the Existing Kafka Ecosystem
Review current Confluent Cloud setup: clusters, topics, connectors, schemas.
Learn naming conventions, retention policies, and consumer group strategies.
Familiarize yourself with the Terraform modules used for Kafka provisioning.
2. Gain Visibility into Data Flows & Integrations
Map out how Kafka interacts with AWS databases (e.g., RDS, Aurora).
Identify key producers and consumers, and their SLAs.
Review monitoring tools and alerting thresholds.
3. Review Documentation & Jira Backlog
Read existing Confluence documentation: architecture diagrams, runbooks, onboarding guides.
Review open Jira tickets to understand current pain points, priorities, and upcoming work.
Begin contributing to documentation updates and ticket grooming.
Top Priorities: First Few Months
1. Stabilize & Optimize Kafka Infrastructure
Address any performance bottlenecks or reliability issues.
Tune configurations for throughput, replication, and retention.
Ensure Terraform modules are clean, reusable, and version-controlled.
2. Improve Automation & Observability
Enhance Terraform automation for Kafka and AWS provisioning.
Set up or refine dashboards and alerts for Kafka health and data pipeline performance.
3. Collaborate & Enable Teams
Work with data engineers and developers to onboard new use cases.
Provide guidance on schema evolution, topic design, and connector usage.
Participate in sprint planning and contribute to long-term platform strategy.
What do you foresee being the biggest challenge in this role?
• Balancing Legacy and Innovation: Understand legacy
Job Types: Full-time, Contract
Pay: $55.00 - $62.00 per hour
License/Certification:
CCAAK - Confluent Certified Administrator for Apache Kafka (Required)
Kafka Certified (Required)
Work Location: Remote