

Coda Search│Staffing
Data Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is a Data Engineer contract opportunity in NYC, focusing on Spark and Scala development. Requires 4+ years of Scala/Spark experience, database development, and familiarity with Kafka. Financial services industry experience preferred. Pay rate and contract length unspecified.
Country: United States
Currency: $ USD
Day rate: Unknown
Date: March 10, 2026
Duration: Unknown
Location: On-site
Contract: Unknown
Security: Unknown
Location detailed: New York City Metropolitan Area
Skills detailed: #Datasets #Snowflake #Angular #C# #AWS (Amazon Web Services) #Cloud #AWS EMR (Amazon Elastic MapReduce) #Scala #Data Pipeline #Data Warehouse #Data Architecture #Apache Spark #Data Engineering #Programming #Automated Testing #EDW (Enterprise Data Warehouse) #ETL (Extract, Transform, Load) #C++ #Spark (Apache Spark) #Kafka (Apache Kafka) #Java
Role description
Spark / Scala Data Engineer - Streaming Technology Consultant
Location: Onsite - NYC
Contract Opportunity
We are partnering with a leading organization to hire a Spark / Scala Data Engineer to support and enhance a large-scale enterprise data warehouse platform that serves as the single source of truth for financial data across the business.
This role focuses primarily on Spark and Scala development, working within a modern, highly scalable data ecosystem built on technologies such as Spark, Kafka, AWS, and Snowflake. The engineer will contribute to building and optimizing high-volume data pipelines and supporting a streaming data platform used across the enterprise.
The right candidate will bring strong technical expertise while also contributing ideas that improve platform design and overall data architecture.
Principal Responsibilities
• Design, develop, and maintain Spark ETL data pipelines using Scala and Kafka (a minimal sketch follows this list)
• Integrate new data feeds and optimize existing data pipelines
• Perform application profiling and performance tuning
• Provide second-line support for data services and related platform components
• Collaborate with cross-functional teams to deliver solutions that handle large volumes of data
• Ensure data is ingested, curated, standardized, stored, and managed appropriately across the platform
• Take end-to-end ownership of development tasks and deliverables
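As a rough illustration of the first responsibility, the sketch below shows what a minimal Spark Structured Streaming job in Scala might look like: it ingests events from Kafka, standardizes them, and lands curated records. The broker address, topic name, schema, and paths are hypothetical placeholders (not details of this role), and the job assumes the spark-sql-kafka connector is on the classpath.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._

// Minimal sketch of a Kafka -> Spark ETL pipeline in Structured Streaming.
// Broker, topic, schema, and paths below are illustrative placeholders.
object TradeIngest {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("trade-ingest")
      .getOrCreate()
    import spark.implicits._

    // Hypothetical payload schema for a JSON trade event.
    val tradeSchema = new StructType()
      .add("trade_id", StringType)
      .add("symbol", StringType)
      .add("qty", LongType)
      .add("price", DoubleType)
      .add("ts", TimestampType)

    // Ingest: read raw events from a Kafka topic (values arrive as bytes).
    val raw = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092") // placeholder
      .option("subscribe", "trades")                    // placeholder
      .load()

    // Standardize: parse the JSON payload and bound late data with a watermark.
    val parsed = raw
      .select(from_json($"value".cast("string"), tradeSchema).as("t"))
      .select("t.*")
      .withWatermark("ts", "10 minutes")

    // Store: append curated records; a real job might instead target
    // Snowflake or S3, per the stack listed under Technology Exposure.
    parsed.writeStream
      .format("parquet")
      .option("path", "/data/curated/trades")           // placeholder
      .option("checkpointLocation", "/data/chk/trades") // placeholder
      .outputMode("append")
      .start()
      .awaitTermination()
  }
}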
Required Qualifications
• 4+ years of hands-on Scala and Apache Spark development experience (including Spark Streaming)
• Experience with automated testing, including unit, integration, and performance testing (see the test sketch after this list)
• Background in an object-oriented programming language such as Java, C++, or C#
• 2+ years of professional database development experience across multiple technologies
• Basic knowledge of Kafka and event-driven data pipelines
• Strong analytical and problem-solving skills with the ability to work independently in a fast-paced environment
• Strong communication skills and ability to collaborate with global development teams
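To illustrate the automated-testing expectation, here is a minimal sketch of a ScalaTest unit test for a small Spark transformation, run against a local SparkSession. The Standardize function and its rules are invented purely for the example.

import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.functions._
import org.scalatest.funsuite.AnyFunSuite

// Hypothetical transformation under test: trim and uppercase symbols,
// and drop rows with non-positive quantities.
object Standardize {
  def apply(df: DataFrame): DataFrame =
    df.withColumn("symbol", upper(trim(col("symbol"))))
      .filter(col("qty") > 0)
}

class StandardizeSpec extends AnyFunSuite {
  // Local SparkSession so the test runs without a cluster.
  private val spark = SparkSession.builder()
    .master("local[2]")
    .appName("standardize-test")
    .getOrCreate()
  import spark.implicits._

  test("uppercases symbols and drops non-positive quantities") {
    val in  = Seq(("  aapl ", 10L), ("MSFT", 0L)).toDF("symbol", "qty")
    val out = Standardize(in).as[(String, Long)].collect()
    assert(out.sameElements(Array(("AAPL", 10L))))
  }
}

Integration and performance tests would follow the same pattern but exercise real sources and sinks and larger datasets; the unit level shown here keeps transformation logic verifiable in isolation.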
Preferred Experience
• Financial services industry experience
• Familiarity with datasets such as Trades, Positions, P&L, Risk Sensitivities, or Reference Data
Technology Exposure
This role offers exposure to a modern technology stack including:
• AWS (EMR, EKS, and other cloud services)
• Snowflake
• Kafka
• Spark / Scala
• Angular and Java for platform enhancements