Arkhya Tech. Inc.

Senior Kafka Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is a remote Senior Kafka Data Engineer contract position with an undisclosed pay rate. It requires 8+ years of data engineering experience; expertise in Confluent Kafka, Apache Flink or Spark, and SQL; and proficiency in Python, Java, or .NET.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date
January 30, 2026
🕒 - Duration
Unknown
🏝️ - Location
Remote
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
United States
🧠 - Skills detailed
#Spark (Apache Spark) #Data Modeling #Kafka (Apache Kafka) #Apache Spark #Kubernetes #Docker #SQL Server #DevOps #ETL (Extract, Transform, Load) #.Net #Batch #SQL (Structured Query Language) #Python #Data Engineering #C# #SSIS (SQL Server Integration Services) #Strategy #Observability #Data Pipeline #Cloud #Java #Data Quality
Role description
Title: Senior Kafka Data Engineer
Location: Remote
Job Type: Contract

Role Overview
We are hiring a Senior/Staff Data Engineer to lead the architecture and development of enterprise-scale streaming data pipelines, with Kafka as the core technology. This is a hands-on builder-and-owner role involving architecture decisions, production support, and direct collaboration with onsite US clients.

Key Responsibilities
• Architect, build, and operate real-time and near-real-time Kafka-based pipelines
• Lead development using Confluent Kafka (core focus)
• Design and implement stream processing using:
  • Apache Flink (preferred) or
  • Apache Spark Structured Streaming
• Define Kafka best practices for:
  • Topic design, partitioning, and delivery semantics (see the sketch at the end of this description)
  • Error handling and retries
  • Backpressure management and performance tuning
• Design and maintain batch and hybrid ETL pipelines using SQL Server
  • ETL tools preferred: SSIS
• Own data quality, reliability, and observability across streaming pipelines
• Work directly with onsite clients to:
  • Translate business requirements into technical solutions
  • Defend architectural decisions
  • Troubleshoot complex Kafka and streaming production issues
• Mentor engineers and raise overall engineering standards
• Collaborate with DevOps and platform teams in containerized, distributed environments

Required Skills & Experience
• 8+ years of hands-on data engineering experience
• Deep expertise in Confluent Kafka:
  • Architecture and cluster design
  • Topic and partition strategy
  • Delivery semantics (exactly-once, at-least-once, etc.)
• Strong hands-on experience with:
  • Apache Flink (preferred) or
  • Apache Spark Structured Streaming
• Excellent SQL skills (complex analytics, performance tuning)
• Strong experience in Python, Java, or .NET (C#); at least one OOP language is mandatory
• Solid understanding of:
  • Streaming and event-driven architectures
  • ETL / ELT patterns
  • Data modeling and orchestration
• Experience with SQL Server

Nice to Have
• SSIS
• Docker / Kubernetes
• CI/CD for data pipelines
• Cloud or hybrid data platforms
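As a rough illustration of the delivery-semantics and partition-keying practices this role covers, a minimal idempotent Kafka producer in Java might look like the sketch below. The broker address, the "orders" topic, the key, and the payload are placeholder assumptions for illustration, not details from this posting.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class OrdersProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Placeholder broker address; a real deployment would point at the Confluent cluster
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // Idempotent producer: retries cannot introduce duplicate writes,
        // and acks=all waits for all in-sync replicas to acknowledge
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
        props.put(ProducerConfig.ACKS_CONFIG, "all");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Keying by an entity ID keeps all events for that entity on one partition,
            // which preserves per-entity ordering
            ProducerRecord<String, String> record =
                    new ProducerRecord<>("orders", "order-123", "{\"status\":\"CREATED\"}");
            producer.send(record, (metadata, exception) -> {
                if (exception != null) {
                    // Non-retriable failures surface here; a production pipeline
                    // would typically route these to a dead-letter path
                    exception.printStackTrace();
                } else {
                    System.out.printf("Wrote to %s-%d at offset %d%n",
                            metadata.topic(), metadata.partition(), metadata.offset());
                }
            });
        } // close() flushes any outstanding records
    }
}

Note that idempotence only prevents duplicates on the producer-to-broker hop; end-to-end exactly-once delivery would additionally require Kafka's transactional producer APIs or a stream processor such as Flink running its Kafka sink in exactly-once mode.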