

DKMRBH Inc
Senior Scala Spark Engineer (Kafka, AWS, Streaming Data)
Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Scala Spark Engineer in NYC; the contract length and pay rate are unspecified. Required skills include 4+ years of experience with Scala, Apache Spark, and Kafka, plus experience building ETL/data pipelines.
Country: United States
Currency: $ USD
Day rate: Unknown
Date: March 20, 2026
Duration: Unknown
Location: On-site
Contract: Unknown
Security: Unknown
Location detailed: New York City Metropolitan Area
Skills detailed
#C++ #BI (Business Intelligence) #Data Pipeline #Datasets #Kafka (Apache Kafka) #EDW (Enterprise Data Warehouse) #Java #Data Warehouse #Scala #AWS (Amazon Web Services) #ETL (Extract, Transform, Load) #C# #Spark (Apache Spark) #Apache Spark #Batch
Role description
Title: Senior Scala Spark Engineer (Kafka, AWS, Streaming Data)
Location: NYC
Work Model: Onsite / Hybrid (as per client)
Visa: Open
Interview: Technical rounds focused on Spark + problem solving
Role Snapshot
• Owning Spark/Scala pipelines powering enterprise data warehouse systems
• Working on streaming + batch ingestion (Kafka + Spark)
• Tuning performance, fixing bottlenecks, and supporting live systems
Environment
• High-scale, data-intensive financial platform
• Streaming + distributed systems (Spark EMR, Kafka, AWS, EKS)
• Fast-paced, production-first engineering culture
What this role actually owns day-to-day
• Build and evolve Spark ETL pipelines using Scala
• Add and onboard new data feeds into Kafka/Spark pipelines
• Tune jobs for performance (memory, partitions, execution plans)
• Support production pipelines and debug failures under load
• Work with data consumers (BI, analytics, trading systems) to shape usable datasets
• Own delivery end to end, from development through release and support
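To give a concrete flavor of the partition-tuning work listed above, here is a minimal plain-Scala sketch of the kind of sizing helper such pipelines often use. The 128 MB target and all names are illustrative assumptions, not client specifics:

```scala
// Illustrative helper: choose a partition count so that each Spark task
// processes roughly a target number of bytes. 128 MB per partition is a
// common rule of thumb, not a value mandated by this role.
object PartitionSizing {
  val TargetBytesPerPartition: Long = 128L * 1024 * 1024

  // Round up so no partition exceeds the target; never go below minPartitions.
  def targetPartitions(inputBytes: Long, minPartitions: Int = 1): Int = {
    val raw = math.ceil(inputBytes.toDouble / TargetBytesPerPartition).toInt
    math.max(raw, minPartitions)
  }
}

// Usage, e.g. before df.repartition(n):
// 10 GB of input at 128 MB per partition gives 80 partitions.
val n = PartitionSizing.targetPartitions(10L * 1024 * 1024 * 1024)
```

In practice the byte count would come from input file sizes or Spark's own statistics, and the target would be tuned against executor memory and shuffle behavior.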
Key Responsibilities
• Write and maintain Spark jobs (Scala) handling high-volume data
• Integrate Kafka streams into batch + streaming pipelines
• Profile jobs and optimize execution time and resource usage
• Handle pipeline failures, reruns, and production fixes
• Build and maintain automated tests (unit + integration + performance)
• Collaborate with engineering and data teams across regions
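The automated-testing bullet above usually means fast, deterministic unit tests around pure parsing and transformation logic, kept separate from Spark itself. A minimal sketch in plain Scala; `TradeEvent`, the pipe-delimited format, and the field names are hypothetical, not the client's actual feed:

```scala
// Hypothetical feed record and parser, illustrating the kind of pure
// transformation logic that unit tests in such pipelines typically cover.
final case class TradeEvent(symbol: String, qty: Long, price: BigDecimal)

object FeedParser {
  // Parses a pipe-delimited line like "AAPL|100|189.55".
  // Returns None on malformed input rather than failing the whole job,
  // so bad records can be routed to a dead-letter path.
  def parse(line: String): Option[TradeEvent] =
    line.split('|') match {
      case Array(sym, qty, px) =>
        for {
          q <- qty.toLongOption
          p <- scala.util.Try(BigDecimal(px)).toOption
        } yield TradeEvent(sym, q, p)
      case _ => None
    }
}
```

Because the function is pure, it can be exercised with plain assertions or ScalaTest without spinning up a SparkSession; the Spark job then just maps this parser over the raw input.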
Must-Have Requirements (Non-Negotiable)
• 4+ years hands-on Scala + Apache Spark (including streaming) in production
• Experience building and maintaining ETL/data pipelines at scale
• Strong understanding of distributed processing and performance tuning
• Experience with Kafka or event-driven data pipelines
• Solid background in Java, C++, or C#
• Database experience across relational or distributed systems





