Infobahn Solutions Inc.

Confluent/Kafka Developer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Confluent/Kafka Developer on a US Federal Government project, offering a contract length of over 6 months and a pay rate of $95,000 - $140,000 per year. Key skills include Kafka, Confluent Platform, Change Data Capture, Java, and AWS.
🌎 - Country
United States
πŸ’± - Currency
$ USD
-
πŸ’° - Day rate
636
-
πŸ—“οΈ - Date
November 27, 2025
πŸ•’ - Duration
More than 6 months
-
🏝️ - Location
Remote
-
πŸ“„ - Contract
Unknown
-
πŸ”’ - Security
Unknown
-
πŸ“ - Location detailed
Remote
-
🧠 - Skills detailed
#S3 (Amazon Simple Storage Service) #Delta Lake #Programming #Data Ingestion #Scala #Automation #Deployment #Monitoring #Data Integration #Grafana #Kafka (Apache Kafka) #Databases #ETL (Extract, Transform, Load) #Terraform #JDBC (Java Database Connectivity) #GIT #Scripting #Microservices #JSON (JavaScript Object Notation) #Computer Science #Databricks #Data Engineering #Data Pipeline #Spark (Apache Spark) #Apache Kafka #YAML (YAML Ain't Markup Language) #Snowflake #Data Processing #Linux #AWS (Amazon Web Services) #DevOps #Data Reconciliation #Cloud #Elasticsearch #Ansible #Java #AWS S3 (Amazon Simple Storage Service) #Kubernetes #Oracle #Python #SQL (Structured Query Language) #Docker #Prometheus
Role description
Kafka Developer/Engineer Job Description

US Federal Government project requirements mandate that this role is open to US Citizens only. (No exceptions to this requirement. Please do not apply if you are not a US Citizen.)

Infobahn Solutions is hiring a Kafka/Confluent developer with strong Change Data Capture skills in the Washington DC Metro Area for a US Federal Government project. We are seeking an experienced Kafka Developer/Engineer with deep hands-on expertise in Confluent Platform (v7.9 and above) running on Red Hat OpenShift (OCP). The ideal candidate will design, develop, and optimize real-time data streaming pipelines, with a strong focus on Change Data Capture (CDC) integrations, including Debezium/Confluent LogMiner Oracle source connectors, Confluent sink connectors (e.g., S3, Elasticsearch, JDBC), and Spark Streaming. This role involves working across hybrid environments (on-premise and cloud), integrating CDC feeds from legacy on-premise databases into Confluent clusters, and developing Spark-based data processing jobs.

As a Confluent Developer, you will be responsible for designing, developing, and implementing real-time data processing and streaming solutions using Confluent Platform. You will work closely with cross-functional teams to understand business requirements and transform them into scalable, high-performance data pipelines. Your role will involve using Apache Kafka and related technologies to build change-event-driven applications, data integration processes, and real-time analytics solutions. The Kafka/Confluent developer will be part of an Event Streaming team that is implementing data streaming using Confluent Kafka. You will be responsible for designing, developing, and implementing real-time data pipelines and streaming applications using Confluent Platform and Apache Kafka. Project implementation experience using Confluent is required.

What you'll be doing:
• Design and develop cloud ETL processes and workflows in conjunction with Confluent Kafka event-driven architecture processes.
• Deploy and configure Debezium/Confluent CDC source connectors, i.e. Oracle LogMiner CDC source connectors, for transactional data ingestion; multiple other databases will also be ingested onto the Confluent platform (see the connector sketch below).
• Create and maintain sink connectors for AWS S3, JDBC, and other downstream systems (e.g., Snowflake, Elastic, Databricks).
• Develop Python- and Java-based Kafka clients, producers/consumers, and custom SMTs for advanced streaming and data transformation (see the producer sketch below).
• Tune Kafka Connect and worker configurations for high-throughput, low-latency CDC ingestion.
• Implement Schema Registry best practices for schema evolution and compatibility management.
• Integrate Spark Structured Streaming jobs with Kafka topics for downstream enrichment, aggregation, or transformation (see the streaming sketch below).
• Collaborate with DevOps and Platform teams to deploy and manage Kafka connectors, monitoring via Prometheus/Grafana, and log aggregation.
• Troubleshoot CDC offset and SCN misalignments (LogMiner), partition rebalances, and message serialization issues (Avro/JSON/Protobuf).
• Contribute to CI/CD automation for Kafka connector deployments using Helm, ArgoCD, or GitOps practices on OpenShift.
• DevOps engineering using Kubernetes.
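For illustration, here is a minimal sketch of registering a Debezium Oracle LogMiner source connector and an S3 sink connector through the Kafka Connect REST API. The Connect URL, credentials, table and topic names, and bucket are hypothetical placeholders, and the exact configuration keys vary by connector version.

```python
# Minimal sketch: register a CDC source and an S3 sink connector via the
# Kafka Connect REST API. Hosts, credentials, and table/topic names are
# placeholders; config keys depend on the connector versions in use.
import json
import requests

CONNECT_URL = "http://connect.example.internal:8083"  # hypothetical Connect REST endpoint

oracle_cdc_source = {
    "connector.class": "io.debezium.connector.oracle.OracleConnector",
    "database.hostname": "oracle.onprem.example",      # placeholder legacy Oracle host
    "database.port": "1521",
    "database.user": "cdc_user",
    "database.password": "${file:/opt/secrets/oracle.properties:password}",
    "database.dbname": "ORCLCDB",
    "database.connection.adapter": "logminer",          # LogMiner-based capture
    "topic.prefix": "legacy.oracle",
    "table.include.list": "APP.ORDERS,APP.CUSTOMERS",
    "schema.history.internal.kafka.bootstrap.servers": "kafka.example.internal:9092",
    "schema.history.internal.kafka.topic": "schema-history.legacy.oracle",
}

s3_sink = {
    "connector.class": "io.confluent.connect.s3.S3SinkConnector",
    "topics": "legacy.oracle.APP.ORDERS",
    "s3.bucket.name": "example-cdc-landing",             # placeholder bucket
    "s3.region": "us-east-1",
    "storage.class": "io.confluent.connect.s3.storage.S3Storage",
    "format.class": "io.confluent.connect.s3.format.avro.AvroFormat",
    "flush.size": "1000",
}

for name, config in [("oracle-cdc-source", oracle_cdc_source), ("orders-s3-sink", s3_sink)]:
    # PUT /connectors/<name>/config creates or updates the connector idempotently.
    resp = requests.put(f"{CONNECT_URL}/connectors/{name}/config",
                        headers={"Content-Type": "application/json"},
                        data=json.dumps(config), timeout=30)
    resp.raise_for_status()
    print(name, "->", resp.status_code)
```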
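A minimal producer sketch using the confluent-kafka Python client with Schema Registry and Avro serialization. The broker and registry URLs, topic name, and record schema are illustrative assumptions, not project specifics.

```python
# Minimal sketch: an Avro-serializing Kafka producer using confluent-kafka
# and Schema Registry. URLs, topic, and schema are illustrative placeholders.
from confluent_kafka import Producer
from confluent_kafka.schema_registry import SchemaRegistryClient
from confluent_kafka.schema_registry.avro import AvroSerializer
from confluent_kafka.serialization import SerializationContext, MessageField

ORDER_SCHEMA = """
{
  "type": "record",
  "name": "OrderEvent",
  "fields": [
    {"name": "order_id", "type": "string"},
    {"name": "amount", "type": "double"}
  ]
}
"""

schema_registry = SchemaRegistryClient({"url": "http://schema-registry.example.internal:8081"})
avro_serializer = AvroSerializer(schema_registry, ORDER_SCHEMA)

producer = Producer({"bootstrap.servers": "kafka.example.internal:9092"})

def delivery_report(err, msg):
    # Called once per message to confirm delivery or surface errors.
    if err is not None:
        print(f"Delivery failed: {err}")
    else:
        print(f"Delivered to {msg.topic()} [{msg.partition()}] @ offset {msg.offset()}")

event = {"order_id": "A-1001", "amount": 42.50}
producer.produce(
    topic="orders.events",
    key="A-1001",
    value=avro_serializer(event, SerializationContext("orders.events", MessageField.VALUE)),
    on_delivery=delivery_report,
)
producer.flush()
```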
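And a streaming sketch of a Spark Structured Streaming job consuming a Kafka topic for downstream transformation, assuming the spark-sql-kafka connector is on the classpath; the broker, topic, payload schema, and output paths are placeholders.

```python
# Minimal sketch: read CDC events from a Kafka topic with Spark Structured
# Streaming, parse the JSON payload, and write rows downstream.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import DoubleType, StringType, StructField, StructType

spark = (SparkSession.builder
         .appName("orders-cdc-enrichment")
         .getOrCreate())

# Assumed shape of the message value after CDC flattening.
payload_schema = StructType([
    StructField("order_id", StringType()),
    StructField("amount", DoubleType()),
])

raw = (spark.readStream.format("kafka")
       .option("kafka.bootstrap.servers", "kafka.example.internal:9092")
       .option("subscribe", "legacy.oracle.APP.ORDERS")
       .option("startingOffsets", "latest")
       .load())

orders = (raw.selectExpr("CAST(value AS STRING) AS json")
          .select(from_json(col("json"), payload_schema).alias("o"))
          .select("o.*"))

query = (orders.writeStream
         .format("parquet")                               # e.g. an S3 landing zone
         .option("path", "s3a://example-bucket/orders/")  # placeholder path
         .option("checkpointLocation", "s3a://example-bucket/_checkpoints/orders/")
         .outputMode("append")
         .start())

query.awaitTermination()
```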
Role Requirements:
• Bachelor's/Master's degree in Computer Science, Engineering, or a related field.
• 8+ years of proven experience as a Confluent or Kafka developer in a professional setting.
• 7+ years of strong programming skills in Java, Scala, or Python.
• 8+ years of solid experience as a database developer and in-depth knowledge of Oracle and other relational/NoSQL databases.
• Solid understanding of Apache Kafka and Confluent Platform, including Kafka Streams and Kafka Connect.
• 3+ years working with Debezium CDC connectors, especially the Oracle LogMiner source connector and other Confluent source connectors.
• Strong hands-on experience deploying Kafka Connect clusters and connectors on Red Hat OpenShift (OCP) or Kubernetes.
• Advanced skills with Python for data pipeline scripting, Kafka producer/consumer utilities, and connector automation.
• Working knowledge of Java (Kafka clients, SMTs, or microservices).
• Experience with Confluent sink connectors, especially the S3, JDBC, and Elasticsearch sinks.
• Familiarity with Spark Structured Streaming and integration with Kafka.
• Proficiency with Schema Registry, Avro/Protobuf serialization, and compatibility management.
• Strong understanding of Kafka Connect offsets, SCN-based replay, and error-handling patterns for CDC pipelines.
• Experience with monitoring and alerting via Prometheus, Grafana, and Control Center.
• Knowledge of Linux, YAML, Helm, and containerization in OpenShift environments.
• Experience with data serialization formats such as Avro, JSON, and Protobuf.
• Familiarity with messaging patterns, event sourcing, and stream processing concepts.
• Knowledge of containerization and orchestration technologies such as Docker and Kubernetes is a plus.
• Excellent problem-solving skills and the ability to work independently and in a team.
• Strong communication skills to collaborate effectively with team members and stakeholders.
• More than 7 years of experience performing data reconciliation, data validation, and ETL testing; deploying ETL packages and automating ETL jobs; and developing reconciliation reports.
• Strong analytical skills applied to the maintenance and/or development of business software solutions.
• Must be able to work with a team to write code, review code, and work on system operations.
• DevOps experience using Git, developing and deploying code to production.
• Proficient in using AWS cloud services for data engineering tasks.
• Proficient in programming in Python, shell, or other scripting languages for data movement.
• Eligible for a US Government issued IRS MBI (candidates with an active IRS MBI or an active Top Secret/Secret clearance will be preferred).

Preferred Qualifications:
• Experience with Confluent for Kubernetes (CFK) operators and Helm-based deployment automation.
• Familiarity with Terraform, Ansible, or GitOps pipelines for connector and topic configuration management.
• Experience integrating Confluent with Delta Lake / Databricks.
• Exposure to Oracle 19c or 21c architectures, LogMiner SCN troubleshooting, and Oracle redo log tuning.
• Familiarity with AWS S3 sink integration patterns.
• Experience working in FedRAMP / DoD / IRS / Treasury compliant environments is a strong plus.
• Confluent Developer industry certifications.

Job Types: Full-time, Contract
Pay: $95,000.00 - $140,000.00 per year
Benefits:
• Dental insurance
• Flexible schedule
• Health insurance
• Life insurance
• Paid time off
• Vision insurance
Experience:
• AWS: 3 years (Required)
• CI/CD: 3 years (Required)
• Kafka: 7 years (Required)
• Java: 3 years (Required)
• Change Data Capture: 3 years (Required)
Work Location: Remote