Sr. Flink/Kafka Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Sr. Flink/Kafka Engineer on a 12+ month W2 contract with an unspecified pay rate. It requires expertise in event-driven architectures, Apache Kafka, Apache Flink, Kubernetes, and public cloud platforms (AWS, GCP, or Azure). Hybrid work in Concord, CA or Charlotte, NC.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date discovered
April 23, 2025
🕒 - Project duration
More than 6 months
🏝️ - Location type
Hybrid
📄 - Contract type
W2 Contractor
🔒 - Security clearance
Unknown
📍 - Location detailed
San Francisco, CA
🧠 - Skills detailed
#Data Processing #GCP (Google Cloud Platform) #Infrastructure as Code (IaC) #ML (Machine Learning) #Grafana #Kubernetes #Prometheus #Python #Terraform #Scala #Azure #Kafka (Apache Kafka) #Deployment #Programming #Apache Kafka #Debugging #Data Science #AWS (Amazon Web Services) #Cloud #Data Pipeline #Public Cloud #Observability #API (Application Programming Interface) #Java
Role description

Please apply only if you can work on a W2 basis; C2C/1099 is not available for this opening.

Role: Sr. Flink Platform Engineer

Contract: 12+ months

Location options: Concord, CA / Charlotte, NC (Hybrid working model – 3 days each week onsite)

Alternative option: 2 days per week in San Francisco, CA and 1 day in Concord, CA, for the right candidate

We’re seeking a Senior Platform Engineer with deep expertise in event-driven architectures, particularly leveraging Apache Kafka and Apache Flink, to help design, build, and scale our next-generation streaming platform. You will be a technical leader responsible for driving the architecture and reliability of real-time data pipelines that power mission-critical services across the organization.

In this role, you’ll collaborate with software engineers, data scientists, and infrastructure teams to deliver robust, observable, and scalable streaming systems. You’ll also bring strong hands-on experience in Kubernetes and public cloud environments (AWS, GCP, or Azure) to optimize deployment, orchestration, and resilience.

Key Responsibilities:

Design and implement scalable, fault-tolerant streaming data platforms using Apache Kafka and Apache Flink (an illustrative sketch follows this list).

Lead architectural decisions and define best practices for real-time data processing and delivery.

Develop and maintain self-service infrastructure patterns and tools to enable internal teams to consume, process, and produce streaming data effectively.

Optimize system performance, reliability, and observability in a Kubernetes-based environment.

Drive infrastructure as code practices and automate deployment workflows using tools like Terraform, Helm, and CI/CD pipelines.

Collaborate with data and engineering teams to support use cases across analytics, ML, and operational systems.

Champion platform reliability, scalability, and cost-efficiency across public cloud platforms (AWS, GCP, or Azure).

Mentor junior engineers and help shape the technical roadmap for the platform.
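For context, here is a minimal, purely illustrative sketch (in Java, one of the languages listed in the qualifications) of the kind of fault-tolerant Kafka-to-Flink pipeline the first responsibility describes. The broker address, topic, consumer group, and the placeholder transformation and sink are assumptions for illustration, not details from this posting.

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class OrdersPipeline {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Periodic checkpoints provide fault tolerance: on failure, Flink restores
        // operator state and Kafka offsets from the last checkpoint instead of
        // reprocessing the whole topic.
        env.enableCheckpointing(60_000);

        KafkaSource<String> source = KafkaSource.<String>builder()
                .setBootstrapServers("kafka:9092")               // assumption: broker address
                .setTopics("orders")                             // assumption: topic name
                .setGroupId("orders-pipeline")                   // assumption: consumer group
                .setStartingOffsets(OffsetsInitializer.earliest())
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();

        env.fromSource(source, WatermarkStrategy.noWatermarks(), "orders-source")
           .map(String::toUpperCase)                             // placeholder for real enrichment logic
           .print();                                             // placeholder sink; a real job writes to another topic, store, etc.

        env.execute("orders-pipeline");
    }
}

In a production setting a job like this would typically run on Kubernetes (for example via the Flink Kubernetes operator) with checkpoints written to durable cloud storage, which is where the Kubernetes and cloud responsibilities above come in.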

Required Qualifications:

7+ years of experience in backend/platform engineering with a strong focus on distributed systems.

Deep expertise in Apache Kafka (including Kafka Streams, Connect) and Apache Flink (DataStream API, state management, CEP, etc.); a brief state-management sketch follows this list.

Hands-on experience running and managing workloads in Kubernetes.

Solid experience with cloud-native technologies and services in AWS, Google Cloud, or Azure.

Strong programming skills in Java, Scala, or Python.

Proficiency with observability stacks (e.g., Prometheus, Grafana, OpenTelemetry) and debugging distributed systems.

Familiarity with infrastructure-as-code tools like Terraform, Pulumi, or similar.

Strong communication skills and the ability to drive technical initiatives across teams.
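As a rough illustration of the Flink state-management expertise called out above, the hypothetical operator below keeps a running count per key in Flink-managed keyed state; the class and usage are examples for this posting, not code from the employer.

import org.apache.flink.api.common.functions.RichFlatMapFunction;
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.util.Collector;

// Maintains a per-key running count in Flink-managed keyed state, which is
// checkpointed automatically and restored after a failure.
public class RunningCount extends RichFlatMapFunction<String, String> {
    private transient ValueState<Long> count;

    @Override
    public void open(Configuration parameters) {
        count = getRuntimeContext().getState(
                new ValueStateDescriptor<>("count", Types.LONG));
    }

    @Override
    public void flatMap(String value, Collector<String> out) throws Exception {
        Long current = count.value();
        long next = (current == null ? 0L : current) + 1;
        count.update(next);
        out.collect(value + " -> " + next);
    }
}

// Hypothetical usage: events.keyBy(e -> e).flatMap(new RunningCount())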

EEO:

“Mindlance is an Equal Opportunity Employer and does not discriminate in employment based on – Minority/Gender/Disability/Religion/LGBTQI/Age/Veterans.”