

DevOps Kafka Confluent Architect
Featured Role | Apply direct with Data Freelance Hub
This role is for a DevOps Kafka Confluent Architect, offering a 12+ month contract in a hybrid location (Dallas, TX / Chicago, IL), open to Corp-to-Corp (C2C) engagement; the pay rate is not specified. Key skills include CI/CD pipeline setup, Kubernetes, Confluent Kafka, and Terraform.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: September 11, 2025
Project duration: More than 6 months
Location type: Hybrid
Contract type: Corp-to-Corp (C2C)
Security clearance: Unknown
Location detailed: Dallas, TX
Skills detailed: #Kafka (Apache Kafka) #Python #Kubernetes #Vault #S3 (Amazon Simple Storage Service) #Cloud #Monitoring #Storage #Migration #Prometheus #SonarQube #GitHub #Deployment #Jenkins #Replication #Maven #Artifactory #Zookeeper #Strategy #Terraform #Virtualization #Infrastructure as Code (IaC) #Splunk #Java #Automation #Continuous Deployment #Data Replication #Ansible #AWS (Amazon Web Services) #DevOps
Role description
Confluent Kafka - CFK/MRC
Dallas, TX / Chicago, IL (Hybrid)
12+ Months Contract
Open for C2C
Responsibilities:
• As a DevOps Architect, set up CI/CD pipelines for application deployment in Kubernetes
• Set up Confluent Kafka (MRC or CFK) across all regions in a hybrid environment
• Set up Rancher and downstream Kubernetes clusters in multiple environments and regions
• Set up IaC pipelines using Terraform Enterprise (TFE) and Ansible to automate Kafka cluster deployments (see the sketch after this list)
• Set up IaC pipelines using TFE and Ansible to automate Rancher and downstream K8s cluster deployments
• Work in the DevOps team to build new shared infrastructure services for the on-premises failover environment: S3, Confluent Kafka, and the data store
• Work with the DevOps team to establish connectivity between the new failover shared services and existing shared services: secrets, identity, LDAP, DNS, Artifactory, Jenkins, and Splunk
• Work with the DevOps team to automate the DR deployment strategy, including automation of data replication between the cloud and failover environments for all applications, shared services, and CI/CD deployment tools
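For illustration only, here is a minimal sketch of what one pipeline step for the TFE/Ansible automation could look like, assuming the terraform and ansible-playbook CLIs are available on the build agent; the directories, playbook, and inventory paths are hypothetical placeholders:

```python
# Hypothetical pipeline step: apply Terraform, then run an Ansible playbook.
# Assumes terraform and ansible-playbook are on PATH; all paths are placeholders.
import subprocess
import sys

def run(cmd, cwd=None):
    """Run a command and fail the pipeline on a non-zero exit code."""
    print("+ " + " ".join(cmd))
    result = subprocess.run(cmd, cwd=cwd)
    if result.returncode != 0:
        sys.exit(result.returncode)

def deploy_kafka_cluster(tf_dir="infra/kafka",
                         playbook="playbooks/kafka-cfk.yml",
                         inventory="inventories/prod/hosts.ini"):
    # Provision the underlying infrastructure with Terraform.
    run(["terraform", "init", "-input=false"], cwd=tf_dir)
    run(["terraform", "apply", "-auto-approve", "-input=false"], cwd=tf_dir)
    # Configure and deploy the Kafka (CFK) layer with Ansible.
    run(["ansible-playbook", "-i", inventory, playbook])

if __name__ == "__main__":
    deploy_kafka_cluster()
```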
Must have:
• Strong experience setting up CI/CD pipelines with DevOps tools such as Jenkins, Artifactory, Confluent CFK, Kubernetes, Vault, SonarQube, GitHub, Terraform, Ansible, Rancher, and Harness. The DevOps Architect should also have experience deploying applications on Kubernetes using Rancher, Helm charts, and Harness, deploying Kafka MRC or CFK, and using monitoring tools such as LogicMonitor and Splunk. Experience with AWS services and on-premises infrastructure in a hybrid environment is a strong plus, especially setting up Confluent CFK or MRC and handling S3 replication with Cloudian.
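As one illustration of the S3/Cloudian replication piece, the following is a minimal spot-check sketch that compares object listings between an AWS bucket and an S3-compatible Cloudian endpoint; the bucket names, endpoint URL, and credentials are hypothetical, and this is a verification aid rather than a full replication setup:

```python
# Hypothetical spot check: compare object keys between an AWS S3 bucket and a
# Cloudian (S3-compatible) bucket to gauge replication. Names/URLs are placeholders.
import boto3

def list_keys(client, bucket):
    """Return the set of object keys in a bucket, following pagination."""
    keys = set()
    paginator = client.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket):
        for obj in page.get("Contents", []):
            keys.add(obj["Key"])
    return keys

aws_s3 = boto3.client("s3")  # uses the default AWS credentials/region
cloudian = boto3.client(
    "s3",
    endpoint_url="https://cloudian.example.internal",  # hypothetical on-prem endpoint
    aws_access_key_id="CLOUDIAN_KEY",
    aws_secret_access_key="CLOUDIAN_SECRET",
)

source = list_keys(aws_s3, "app-data-prod")        # hypothetical source bucket
target = list_keys(cloudian, "app-data-prod-dr")   # hypothetical failover bucket
missing = source - target
print(f"{len(missing)} objects not yet replicated" if missing else "buckets in sync")
```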
Kubernetes Administration:
• Helm
• CRDs
• K8s Operators
• General Kubernetes administration, including PVCs (see the sketch after this list)
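As a small illustration of routine cluster administration, here is a sketch using the official Kubernetes Python client to flag PersistentVolumeClaims that are not Bound; it assumes a kubeconfig with read access to the cluster:

```python
# Minimal sketch: list PVCs cluster-wide and flag any that are not Bound.
# Assumes `pip install kubernetes` and a kubeconfig with read access.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when run in a pod
core = client.CoreV1Api()

pvcs = core.list_persistent_volume_claim_for_all_namespaces()
for pvc in pvcs.items:
    phase = pvc.status.phase
    if phase != "Bound":
        print(f"{pvc.metadata.namespace}/{pvc.metadata.name}: {phase}")
```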
CFK Administration:
• Kafka
• KRaft/ZooKeeper
• Kafka Connect
• Kafka monitoring, such as consumer lag, broker health, and topic growth (see the consumer-lag sketch after this list)
• Kafka cluster management (topics, partitions, brokers, etc.)
• Kafka backup solution and support
• Pilot Kafka data replication between AWS and on-prem
• Experience with Rancher RKE2 K8s clusters and the orchestration platform
• K8s cluster management, monitoring, and operational tools integration
• K8s cluster and secrets management integration
• K8s CSI plug-ins with object, file, and block storage vendors
• Kubernetes backup solution and support
• S3/Cloudian object storage for shared infrastructure
• Specialization in cloud infrastructure modernization, virtualization, data center setup, DR & BC strategies, and DevOps
• Experience in on-premises data center operations and AWS-hosted data center and operations management
• Hands-on experience with migration tools
• Experience with continuous deployment tools, techniques, and automation frameworks, especially Terraform Enterprise and Ansible
• Hands-on experience writing testable scripts using Python or other languages
• Experience managing Helm charts and deploying into Kubernetes (K8s)
• Expertise with monitoring tools and frameworks such as Splunk, LogicMonitor, SignalFx, and Prometheus
• Experience on projects involving deployment and management of microservices and hybrid cloud/on-prem infrastructure
• Intermediate working knowledge of development tools such as Maven/Gradle and Java, and of distributed message brokers such as Kafka, RabbitMQ, ActiveMQ, Amazon Kinesis, etc.
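To illustrate the consumer-lag monitoring mentioned above, here is a minimal sketch using the confluent-kafka Python client to compute per-partition lag for one consumer group; the broker address, group id, and topic name are hypothetical:

```python
# Minimal sketch: compute per-partition consumer lag for one group.
# Assumes `pip install confluent-kafka`; broker, group, and topic are placeholders.
from confluent_kafka import Consumer, TopicPartition, OFFSET_INVALID

conf = {
    "bootstrap.servers": "kafka.example.internal:9092",  # hypothetical broker
    "group.id": "orders-service",                        # hypothetical consumer group
    "enable.auto.commit": False,
}
consumer = Consumer(conf)

topic = "orders"  # hypothetical topic
metadata = consumer.list_topics(topic, timeout=10)
partitions = [TopicPartition(topic, p) for p in metadata.topics[topic].partitions]

committed = consumer.committed(partitions, timeout=10)
for tp in committed:
    low, high = consumer.get_watermark_offsets(tp, timeout=10)
    # If the group has never committed on this partition, treat the whole backlog as lag.
    current = tp.offset if tp.offset != OFFSET_INVALID else low
    print(f"partition {tp.partition}: lag={high - current}")

consumer.close()
```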
Good to Have:
• Knowledge of setting up CI/CD pipelines for streaming Flink-based applications is an added advantage (a possible shape of such a deploy step is sketched below)
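For the Flink item, one possible shape of a pipeline deploy step is sketched below, assuming a Flink session cluster whose REST endpoint is reachable from the CI agent; the endpoint URL, jar path, and entry class are hypothetical:

```python
# Hypothetical CI/CD deploy step: upload a built Flink job jar to a session
# cluster's REST API and start it. URL, jar path, and entry class are placeholders.
import requests

FLINK_REST = "http://flink-jobmanager.example.internal:8081"  # hypothetical endpoint

def deploy_flink_job(jar_path="target/stream-job.jar",
                     entry_class="com.example.StreamJob"):
    # Upload the jar produced by the build stage.
    with open(jar_path, "rb") as jar:
        resp = requests.post(f"{FLINK_REST}/jars/upload", files={"jarfile": jar})
    resp.raise_for_status()
    jar_id = resp.json()["filename"].split("/")[-1]

    # Submit the job from the uploaded jar.
    run = requests.post(f"{FLINK_REST}/jars/{jar_id}/run",
                        json={"entryClass": entry_class})
    run.raise_for_status()
    print("started job", run.json().get("jobid"))

if __name__ == "__main__":
    deploy_flink_job()
```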