

360 Technology
Senior Data Engineer (Fixed Income / Capital Markets) | New York City, NY (Local Only) | W2 Only
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer in New York City, NY, on a long-term W2 contract. Key skills include Kafka, Snowflake, and Python, with required experience in Fixed Income/Capital Markets and data pipeline development. Local candidates only.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
November 4, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
New York, NY
-
🧠 - Skills detailed
#GCP (Google Cloud Platform) #Spark (Apache Spark) #Compliance #SQL (Structured Query Language) #Data Pipeline #ETL (Extract, Transform, Load) #AWS (Amazon Web Services) #Kafka (Apache Kafka) #Cloud #Data Processing #Automation #Data Quality #Snowflake #MIFID (Markets in Financial Instruments Directive) #Data Manipulation #Data Integration #Python #Batch #Data Modeling #Azure #Scala #Data Engineering
Role description
Role: Senior Data Engineer (Fixed Income / Capital Markets) | W2 Only
Location: New York City, NY (Onsite)
Duration: Long-Term Contract
Note: Local candidates only; strong Fixed Income / Capital Markets domain experience required
Job Description
We are seeking a Senior Data Engineer to design, develop, and optimize data pipelines and analytics solutions supporting front-to-back office operations within the Fixed Income / Capital Markets domain. The ideal candidate will be an expert in Kafka, Snowflake, and Python, with a proven track record of building high-performance data systems for financial applications.
Key Responsibilities
• Lead the development and optimization of batch and real-time data pipelines, ensuring scalability, reliability, and performance.
• Architect, design, and deploy data integration, streaming, and analytics solutions using Spark, Kafka, and Snowflake.
• Collaborate across teams to ensure end-to-end delivery and proactively support peers in achieving project milestones.
• Troubleshoot and resolve technical performance issues; recommend tuning and optimization solutions.
• Design and maintain a Reference Data System leveraging Kafka, Snowflake, and Python (see the ingestion sketch after this list).
• Ensure data quality, consistency, and compliance with enterprise and regulatory standards.
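The Reference Data System responsibility above amounts to a Kafka-to-Snowflake ingestion loop. Below is a minimal sketch of that pattern, not the team's actual implementation: the topic name, table name, and connection settings are hypothetical placeholders, and it assumes the confluent-kafka and snowflake-connector-python packages.

```python
# Minimal Kafka -> Snowflake ingestion sketch (illustrative only).
# Topic, table, and connection details are hypothetical placeholders.
import json
import os

import snowflake.connector
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": os.environ["KAFKA_BROKERS"],
    "group.id": "refdata-loader",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["refdata.instruments"])  # hypothetical topic name

conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="INGEST_WH",
    database="REFDATA",
    schema="PUBLIC",
)
cur = conn.cursor()

try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None:
            continue
        if msg.error():
            print(f"Kafka error: {msg.error()}")
            continue
        record = json.loads(msg.value())
        # Upsert one instrument record; a production pipeline would batch
        # messages and use COPY INTO / Snowpipe for throughput.
        cur.execute(
            "MERGE INTO INSTRUMENT_REF t "
            "USING (SELECT %s AS isin, %s AS description) s "
            "ON t.isin = s.isin "
            "WHEN MATCHED THEN UPDATE SET t.description = s.description "
            "WHEN NOT MATCHED THEN INSERT (isin, description) "
            "VALUES (s.isin, s.description)",
            (record["isin"], record["description"]),
        )
finally:
    cur.close()
    conn.close()
    consumer.close()
```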
Required Skills & Experience
• Proven experience in data pipeline design, development, and maintenance using Kafka, Snowflake, and Python.
• Strong knowledge of distributed data processing and streaming architectures.
• Hands-on experience with Snowflake (data loading, optimization, and performance tuning; a bulk-load sketch follows this list).
• Proficiency in Python for data manipulation, automation, and orchestration.
• Expertise in Kafka ecosystem (Confluent, Kafka Connect, Kafka Streams).
• Solid understanding of SQL, data modeling, and ETL/ELT frameworks.
• Exposure to cloud platforms (AWS, Azure, GCP) preferred.
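On the Snowflake side, data loading and performance tuning typically means staging files and bulk-loading them rather than issuing row-by-row inserts. The sketch below illustrates that approach under stated assumptions: the file path, table name, and connection parameters are hypothetical, and it again assumes snowflake-connector-python.

```python
# Bulk-load sketch: stage a local extract into a table stage, then COPY INTO.
# File path, table name, and connection parameters are hypothetical.
import os

import snowflake.connector

conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="BATCH_WH",
    database="MARKETS",
    schema="FIXED_INCOME",
)
cur = conn.cursor()

# Stage the file into the table's internal stage (@%TRADES), compressed.
cur.execute("PUT file:///tmp/eod_trades.csv @%TRADES AUTO_COMPRESS=TRUE")

# Bulk copy from the table stage; far faster than row-by-row INSERTs.
cur.execute(
    "COPY INTO TRADES "
    "FROM @%TRADES "
    "FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1) "
    "ON_ERROR = 'ABORT_STATEMENT'"
)

cur.close()
conn.close()
```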
Domain Expertise (Preferred)
• Experience in Trade Processing, Settlement, and Reconciliation across Equities, Fixed Income, Derivatives, or FX.
• Deep understanding of trade lifecycle events, order types, allocation rules, and settlement processes.
• Exposure to Funding, Planning & Analysis, Regulatory Reporting, and Compliance functions.
• Familiarity with regulatory frameworks such as Dodd-Frank, EMIR, MiFID II, or equivalent.