

Confluent Kafka Developer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Sr. Confluent Kafka Developer in Minnesota, offering a 6-month Corp-to-Corp contract (with potential conversion to full-time) at a day rate of $480 USD. Requires 3+ years in software development, 2+ years in Python, and expertise in Confluent Kafka, AWS, and Terraform.
Country: United States
Currency: $ USD
Day rate: $480
Date discovered: August 7, 2025
Project duration: More than 6 months
Location type: On-site
Contract type: Corp-to-Corp (C2C)
Security clearance: Unknown
Location detailed: Minnesota, United States
Skills detailed: #Terraform #Lambda (AWS Lambda) #EC2 #Cloud #Java #IAM (Identity and Access Management) #Data Science #Kafka (Apache Kafka) #S3 (Amazon Simple Storage Service) #Data Quality #Monitoring #Infrastructure as Code (IaC) #Deployment #Docker #Data Integration #Python #AWS (Amazon Web Services) #DevOps #Apache Kafka #Data Pipeline #Data Engineering #ETL (Extract, Transform, Load) #Programming #Kubernetes #Data Processing #Scala #Snowflake
Role description
Job Title: Sr. Confluent Kafka Developer
Location: Minnesota
Contract Type: 6-month Corp-to-Corp (C2C), then full-time conversion with the client
Job Overview:
We are seeking an experienced Confluent Cloud Kafka Developer to design, develop, and manage scalable, real-time data streaming solutions using Apache Kafka on Confluent Cloud. The ideal candidate will work with data engineering teams to architect streaming data pipelines, integrate diverse data sources, and optimize Kafka for high-performance applications. This role requires strong expertise in Apache Kafka, Confluent Cloud, and distributed streaming platforms, with a focus on real-time integrations.
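As a concrete illustration of this kind of work, the sketch below produces events to a Confluent Cloud topic with the confluent-kafka Python client; the bootstrap server, API credentials, topic name, and payload are all placeholders, not details from this posting.

```python
import json

from confluent_kafka import Producer

conf = {
    "bootstrap.servers": "pkc-XXXXX.us-east-1.aws.confluent.cloud:9092",  # placeholder cluster
    "security.protocol": "SASL_SSL",
    "sasl.mechanisms": "PLAIN",
    "sasl.username": "<CLUSTER_API_KEY>",     # placeholder credential
    "sasl.password": "<CLUSTER_API_SECRET>",  # placeholder credential
}
producer = Producer(conf)

def delivery_report(err, msg):
    # Called once per message to confirm delivery or surface an error.
    if err is not None:
        print(f"Delivery failed for key {msg.key()}: {err}")
    else:
        print(f"Delivered to {msg.topic()}[{msg.partition()}] @ offset {msg.offset()}")

event = {"order_id": 42, "status": "created"}  # illustrative payload
producer.produce(
    topic="orders",  # hypothetical topic name
    key=str(event["order_id"]),
    value=json.dumps(event),
    callback=delivery_report,
)
producer.poll(0)   # serve delivery callbacks
producer.flush()   # block until all outstanding messages are delivered
```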
Key Responsibilities:
Develop, implement, and manage streaming data pipelines using Confluent Kafka and Apache Flink.
Collaborate with data engineering teams to integrate streaming solutions with existing data platforms such as AWS and Snowflake.
Design, build, and maintain real-time data pipelines and ensure high data quality, integrity, and reliability (a minimal consumer sketch follows this list).
Utilize Terraform to manage and automate infrastructure deployments on AWS or other cloud environments.
Develop and maintain CI/CD pipelines for deploying and managing Kafka clusters and related infrastructure.
Collaborate with development, DevOps, and data science teams to understand data needs and deliver optimal solutions.
Provide support and guidance on Kafka-related issues and best practices for other teams.
Stay updated with the latest Kafka and streaming technologies and provide recommendations for improvements.
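A minimal sketch of the data-quality responsibility above: consume a topic, validate each record against an assumed schema contract, and quarantine bad records on a dead-letter topic. The broker address, topic names, consumer group, and required fields are all assumptions for illustration.

```python
import json

from confluent_kafka import Consumer, Producer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",  # placeholder broker
    "group.id": "orders-quality-check",     # hypothetical consumer group
    "auto.offset.reset": "earliest",
})
dlq_producer = Producer({"bootstrap.servers": "localhost:9092"})
consumer.subscribe(["orders"])              # hypothetical topic

REQUIRED_FIELDS = {"order_id", "status"}    # assumed schema contract

try:
    while True:
        msg = consumer.poll(timeout=1.0)
        if msg is None:
            continue
        if msg.error():
            raise RuntimeError(msg.error())
        try:
            record = json.loads(msg.value())
            if not isinstance(record, dict) or REQUIRED_FIELDS - record.keys():
                raise ValueError("record fails schema contract")
            # ...hand the validated record to downstream processing...
        except ValueError:  # json.JSONDecodeError is a ValueError subclass
            # Quarantine malformed records instead of silently dropping them.
            dlq_producer.produce("orders.dlq", msg.value())
            dlq_producer.poll(0)
finally:
    consumer.close()
    dlq_producer.flush()
```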
Required Qualifications:
3+ years of experience in software development, with a strong focus on Apache Kafka development and distributed systems.
2+ years of experience in Python for building Kafka applications.
Proven experience with Confluent Kafka, including Schema Registry (see the Avro serialization sketch after this list).
Experience with Infrastructure as Code (IaC) tools, particularly Terraform.
Hands-on experience with AWS services (e.g., EC2, S3, IAM, Lambda).
Solid understanding of data integration, ETL processes, and data pipeline orchestration.
Experience with operational monitoring and performance optimization of streaming pipelines.
Ability to troubleshoot and resolve complex technical issues related to Kafka and its ecosystem.
Strong analytical and problem-solving skills, with attention to detail.
Excellent verbal and written communication skills.
Ability to work effectively in a team environment and collaborate with cross-functional teams.
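For the Schema Registry requirement above, here is a minimal sketch using the confluent-kafka serialization helpers; the registry URL, credentials, schema, and topic are placeholders, and the Avro path requires the fastavro extra.

```python
from confluent_kafka import Producer
from confluent_kafka.schema_registry import SchemaRegistryClient
from confluent_kafka.schema_registry.avro import AvroSerializer
from confluent_kafka.serialization import MessageField, SerializationContext

# Assumed schema for illustration only.
schema_str = """
{
  "type": "record",
  "name": "Order",
  "fields": [
    {"name": "order_id", "type": "long"},
    {"name": "status", "type": "string"}
  ]
}
"""

sr_client = SchemaRegistryClient({
    "url": "https://psrc-XXXXX.us-east-1.aws.confluent.cloud",  # placeholder registry
    "basic.auth.user.info": "<SR_API_KEY>:<SR_API_SECRET>",     # placeholder credential
})
avro_serializer = AvroSerializer(sr_client, schema_str)

producer = Producer({"bootstrap.servers": "localhost:9092"})    # placeholder broker
order = {"order_id": 42, "status": "created"}
producer.produce(
    topic="orders",
    # Serializing registers/looks up the schema and prefixes the payload
    # with the registry's schema id.
    value=avro_serializer(order, SerializationContext("orders", MessageField.VALUE)),
)
producer.flush()
```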
Preferred Qualifications:
2+ years of experience working with Confluent Cloud and implementing Kafka-based solutions.
Proficiency with Apache Flink, ksqlDB, and Kafka Connect for real-time data processing and integration (a connector-registration sketch follows this list).
Working knowledge of Snowflake for data warehousing and analytics in cloud environments.
Strong programming skills in Java, with the ability to build and maintain scalable, event-driven applications.
Experience with other data streaming tools or platforms is a plus.
Familiarity with DevOps practices and tools, including CI/CD pipelines, Docker, and Kubernetes, for building and deploying modern cloud-native applications.
Certifications in AWS, Apache Kafka, or related cloud and data streaming technologies are highly desirable.
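For the Kafka Connect and Snowflake items above, a sketch of registering a Snowflake sink connector through the Kafka Connect REST API; the worker URL, credentials, and connector settings are placeholders, and the exact config keys should be checked against the Snowflake connector documentation.

```python
import requests

connector = {
    "name": "orders-snowflake-sink",  # hypothetical connector name
    "config": {
        "connector.class": "com.snowflake.kafka.connector.SnowflakeSinkConnector",
        "tasks.max": "1",
        "topics": "orders",                                        # hypothetical topic
        "snowflake.url.name": "<account>.snowflakecomputing.com",  # placeholder account
        "snowflake.user.name": "<user>",                           # placeholder credential
        "snowflake.private.key": "<private-key>",                  # placeholder credential
        "snowflake.database.name": "ANALYTICS",                    # assumed target database
        "snowflake.schema.name": "STREAMING",                      # assumed target schema
    },
}

# POST /connectors creates the connector on the Connect worker.
resp = requests.post(
    "http://connect-host:8083/connectors",  # placeholder Connect worker URL
    json=connector,
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```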