Phyton Talent Advisors

Senior Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer in New York, NY, on a contract basis. Pay rate is competitive. Requires 8+ years of data engineering experience, strong ETL/ELT skills, deep Snowflake expertise, and familiarity with AWS services and Kubernetes.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
680
-
πŸ—“οΈ - Date
February 12, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
πŸ“ - Location detailed
New York, NY
-
🧠 - Skills detailed
#Unit Testing #Python #Schema Design #S3 (Amazon Simple Storage Service) #Redis #SQL (Structured Query Language) #AWS (Amazon Web Services) #Prometheus #TypeScript #JavaScript #ETL (Extract, Transform, Load) #Pytest #Clustering #Observability #Cloud #React #Data Architecture #Snowflake #Grafana #Data Engineering #Logging #Batch #Kubernetes #Data Processing #SQS (Simple Queue Service) #Scala #Lambda (AWS Lambda) #Monitoring #Data Pipeline
Role description
Our Client, a Global Financial Services Firm, is seeking a Data Engineer in their New York, NY location. Join a global Risk Technology team building and supporting an enterprise risk platform that enables Risk Management to oversee market and credit exposures across the firm. This platform powers the measurement, analysis, reporting, and control of risk at scale. We're looking for a hands-on Data Engineer to design and operate high-scale, cloud-native data pipelines and deliver reliable, performant data solutions that support critical risk decisions.

What You'll Do
• Design, implement, and operate ETL/ELT pipelines in Python to ingest, cleanse, transform, and publish data into Snowflake
• Build scalable Snowflake data models and SQL transformations (tables, views, materialized views)
• Optimize Snowflake performance using micro-partitioning, clustering keys, Streams/Tasks, and query profiling
• Engineer event-driven integrations using AWS services (S3, SQS, Lambda) for near-real-time data processing
• Optimize distributed computation using strong data structures, partitioning strategies, vectorization, and columnar formats
• Implement cache-aside patterns using ElastiCache/Redis to improve performance and reliability
• Own observability across pipelines: metrics, logging, tracing, cost monitoring, and operational runbooks
• Partner with risk, engineering, and product stakeholders to deliver scalable and resilient data solutions

Required Experience & Skills
• 8+ years of data engineering experience building production-grade Python pipelines at scale
• Strong ETL/ELT architecture experience across batch and streaming environments
• Deep Snowflake expertise: schema design, clustering, Streams/Tasks, query tuning, and cost optimization
• Strong understanding of distributed systems, concurrency, idempotency, and data reliability patterns
• Experience with Kubernetes and containerized compute within AWS environments
• CI/CD implementation, unit testing (pytest), and observability tooling (CloudWatch, Prometheus, Grafana, OpenTelemetry)

Nice to Have
• Experience building responsive, modular UIs using React (hooks, context, functional components) and TypeScript/JavaScript
• Exposure to real-time or event-driven data architectures in enterprise environments
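For candidates weighing the "event-driven integrations using AWS services (S3, SQS, Lambda)" responsibility, a minimal sketch of what that entry point often looks like: a Lambda handler consuming an SQS batch whose message bodies are S3 event notifications. The function name, bucket, and key below are illustrative assumptions, not details from this posting.

```python
import json

def handler(event, context=None):
    """Hypothetical Lambda entry point for an S3 -> SQS -> Lambda pipeline.

    Each SQS record's body is a JSON-encoded S3 event notification; we
    extract the (bucket, key) pairs so a downstream step can load them
    (e.g. into Snowflake).
    """
    objects = []
    for sqs_record in event.get("Records", []):
        s3_event = json.loads(sqs_record["body"])  # SQS body wraps the S3 event
        for s3_record in s3_event.get("Records", []):
            bucket = s3_record["s3"]["bucket"]["name"]
            key = s3_record["s3"]["object"]["key"]
            objects.append((bucket, key))
    return objects

# Example: one SQS record carrying one S3 "object created" notification.
sample_event = {
    "Records": [
        {
            "body": json.dumps(
                {"Records": [{"s3": {"bucket": {"name": "risk-landing"},
                                     "object": {"key": "trades/2026-02-12.csv"}}}]}
            )
        }
    ]
}
```

In interviews for roles like this, the follow-up discussion usually covers idempotency (SQS is at-least-once delivery) and batch failure handling, both of which the posting's "data reliability patterns" line hints at.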
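The "cache-aside patterns using ElastiCache/Redis" bullet can be sketched in a few lines. To keep the example self-contained and runnable, a plain dict with TTLs stands in for Redis; in the actual role this would be redis-py calls (`get`, then `set(..., ex=ttl)`), and `fetch_from_source` is a hypothetical stand-in for a Snowflake query.

```python
import json
import time

# key -> (expires_at, JSON payload); a stand-in for an ElastiCache/Redis node.
_cache: dict[str, tuple[float, str]] = {}

def fetch_from_source(record_id: str) -> dict:
    # Hypothetical source-of-record lookup (e.g. a Snowflake query).
    return {"id": record_id, "exposure": 1_000_000}

def get_record(record_id: str, ttl_seconds: int = 300) -> dict:
    """Cache-aside read: try the cache, fall back to the source on a miss,
    then populate the cache with a TTL so stale entries expire."""
    key = f"risk:{record_id}"
    entry = _cache.get(key)
    if entry and entry[0] > time.monotonic():          # hit, not expired
        return json.loads(entry[1])
    record = fetch_from_source(record_id)              # miss: go to source
    _cache[key] = (time.monotonic() + ttl_seconds, json.dumps(record))
    return record
```

The design choice worth articulating: the application owns cache population (reads never block on writes to the source), and the TTL bounds staleness rather than requiring explicit invalidation.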