Coda Search│Staffing

Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer specializing in Spark/Scala to enhance a financial data warehouse in NYC. Contract length and pay rate are unspecified. It requires 4+ years of Scala/Spark experience and Kafka knowledge; financial services experience is preferred.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
April 9, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
New York City Metropolitan Area
-
🧠 - Skills detailed
#C++ #Programming #Java #AWS (Amazon Web Services) #Automated Testing #Apache Spark #Datasets #Data Architecture #AWS EMR (Amazon Elastic MapReduce) #Scala #Snowflake #EDW (Enterprise Data Warehouse) #ETL (Extract, Transform, Load) #Spark (Apache Spark) #C# #Data Engineering #Kafka (Apache Kafka) #Data Warehouse #Angular #Data Pipeline #Cloud
Role description
Spark / Scala Data Engineer - Streaming Technology Consultant
Location: Onsite - NYC
Contract Opportunity

We are partnering with a leading organization to hire a Spark / Scala Data Engineer to support and enhance a large-scale enterprise data warehouse platform that serves as the single source of truth for financial data across the business.

This role focuses primarily on Spark and Scala development, working within a modern, highly scalable data ecosystem built on technologies such as Spark, Kafka, AWS, and Snowflake. The engineer will contribute to building and optimizing high-volume data pipelines and supporting a streaming data platform used across the enterprise. The right candidate will bring strong technical expertise while also contributing ideas that improve platform design and overall data architecture.

Principal Responsibilities
• Design, develop, and maintain Spark ETL data pipelines using Scala and Kafka
• Integrate new data feeds and optimize existing data pipelines
• Perform application profiling and performance tuning
• Provide second-line support for data services and related platform components
• Collaborate with cross-functional teams to deliver solutions that handle large volumes of data
• Ensure data is ingested, curated, standardized, stored, and managed appropriately across the platform
• Take end-to-end ownership of development tasks and deliverables

Required Qualifications
• 4+ years of hands-on Scala and Apache Spark development experience (including Spark Streaming)
• Experience with automated testing, including unit, integration, and performance testing
• Background in an object-oriented programming language such as Java, C++, or C#
• 2+ years of professional database development experience across multiple technologies
• Basic knowledge of Kafka and event-driven data pipelines
• Strong analytical and problem-solving skills with the ability to work independently in a fast-paced environment
• Strong communication skills and the ability to collaborate with global development teams

Preferred Experience
• Financial services industry experience
• Familiarity with datasets such as Trades, Positions, P&L, Risk Sensitivities, or Reference Data

Technology Exposure
This role offers exposure to a modern technology stack including:
• AWS (EMR, EKS, and other cloud services)
• Snowflake
• Kafka
• Spark / Scala
• Angular and Java for platform enhancements
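For orientation only, the sketch below shows the general shape of a Spark Structured Streaming job of the kind this role describes: consuming a Kafka feed in Scala, parsing events, and landing curated records to storage. The broker address, topic name, trade schema, and S3 paths are hypothetical placeholders and not details of the client's platform.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._

// Minimal illustrative sketch of a Kafka-to-storage streaming pipeline in Spark/Scala.
object TradeStreamIngest {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("trade-stream-ingest")
      .getOrCreate()

    // Assumed event schema; a real feed would define this from its data contract.
    val tradeSchema = new StructType()
      .add("tradeId", StringType)
      .add("symbol", StringType)
      .add("quantity", LongType)
      .add("price", DoubleType)
      .add("eventTime", TimestampType)

    // Read the raw event stream from Kafka (placeholder broker and topic).
    val raw = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "trades")
      .load()

    // Parse the JSON payload in the Kafka value column into typed columns.
    val trades = raw
      .select(from_json(col("value").cast("string"), tradeSchema).as("t"))
      .select("t.*")

    // Write curated records out; sink format and paths are illustrative.
    val query = trades.writeStream
      .format("parquet")
      .option("path", "s3://example-bucket/curated/trades")
      .option("checkpointLocation", "s3://example-bucket/checkpoints/trades")
      .outputMode("append")
      .start()

    query.awaitTermination()
  }
}
```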