Jobs via Dice

Immediate Interview: Data Engineer with Snowflake Experience, NYC (Onsite Position)

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer with Snowflake experience, based in NYC (5 days onsite), offering a long-term contract. Requires 5+ years in software engineering, expertise in Python, Kubernetes, and SQL databases, and a relevant degree.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
February 11, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
New York, NY
-
🧠 - Skills detailed
#AI (Artificial Intelligence) #Python #Quality Assurance #Data Warehouse #Data Ingestion #Statistics #API (Application Programming Interface) #Logstash #RDS (Amazon Relational Database Service) #Computer Science #Angular #GitHub #Agile #Automation #Data Architecture #Data Engineering #Strategy #Infrastructure as Code (IaC) #Unit Testing #Terraform #Scala #Django #"ETL (Extract #Transform #Load)" #ML (Machine Learning) #Aurora #Kubernetes #Jenkins #AWS (Amazon Web Services) #SQL (Structured Query Language) #Data Science #Elasticsearch #dbt (data build tool) #Cloud #NoSQL #Databases #PostgreSQL #SageMaker #Snowflake
Role description
Dice is the leading career destination for tech experts at every stage of their careers. Our client, Svam International, Inc., is seeking the following. Apply via Dice today!

Data Engineer with Snowflake Experience
Location: NYC (5 days onsite)
Long-term Contract

We are seeking a motivated engineer with strong full-stack data engineering skills to join our innovative, dynamic team. This role focuses on building reliable, scalable data products and user experiences that power AI/ML modeling, agentic workflows, and reporting. You will work end-to-end, from data ingestion and transformation through to the UI, to deliver production-grade solutions in a collaborative, fast-paced environment.

Our application stack runs entirely on AWS and includes Angular for the frontend; Python/Django with AWS-managed PostgreSQL (RDS/Aurora) for the API layer; Elasticsearch for search; SageMaker for machine learning; and Python/Celery for background processing. We also leverage Terraform for infrastructure as code, GitHub Actions for CI/CD, and Kubernetes (EKS) for container orchestration. We are investing heavily in our data architecture, leveraging Snowflake, data transformation tooling (e.g. dbt), and modern data ingestion frameworks.

Key Responsibilities:
• Collaborative development: partner with business stakeholders, data scientists, and engineering teammates to define and adopt modern data engineering practices.
• Full-stack data engineering: build across the entire stack, including data ingestion/acquisition and transformation, APIs, front-end components, and automated test suites.
• Specification and design: translate short- and long-term business requirements, architectural considerations, and competing timelines into clear, actionable specifications.
• Code quality: write clean, maintainable, efficient code that adheres to evolving standards and quality processes, including unit tests and isolated integration tests in containerized environments.
• Continuous improvement: contribute to agile practices and provide input on technical strategy, architectural decisions, and process improvements.

Required Skills & Experience:
• Professional experience: 5+ years in software engineering, with a full-stack background building data-intensive applications using Python, Kubernetes, relational and non-relational databases, and modern UI technologies.
• Backend expertise: 3+ years working with Python and Django, building scalable, containerized services with robust APIs and comprehensive unit/integration tests.
• Modern data engineering: strong experience with relational SQL databases (e.g. PostgreSQL), data warehouses (e.g. Snowflake), data transformation tooling (e.g. dbt), and NoSQL databases.
• Testing and QA: solid understanding of unit testing, CI/CD automation, and quality assurance processes to ensure reliable, maintainable code.
• Agile methodology: working knowledge of Agile development practices and workflows.
• Education: Bachelor's or Master's degree in Computer Science, Statistics, Informatics, Information Systems, or a related quantitative field.

Preferred Skills & Experience:
• Machine learning and AI: hands-on experience with large language models (LLMs) and agentic frameworks/workflows.
• Search and analytics: familiarity with the ELK stack (Elasticsearch, Logstash, Kibana) for search and analytics solutions.
• Cloud expertise: experience with AWS cloud services; familiarity with SageMaker; and CI/CD tooling such as GitHub Actions or Jenkins.
• Front-end expertise: experience building user interfaces with Angular or a modern UI stack.
• Financial domain knowledge: broad understanding of equities, fixed income, derivatives, futures, FX, and other financial instruments.
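To give a flavor of the work this posting describes (unit-tested data transformation code written ahead of a warehouse load), here is a minimal, hypothetical Python sketch. It is not from the employer; every name, field, and validation rule below is invented for illustration, and a real pipeline at this team would likely use dbt models and containerized integration tests instead of a single function.

```python
# Hypothetical sketch: a small, testable normalization step of the kind a
# full-stack data engineer might write before loading rows into a warehouse
# such as Snowflake. All names and fields are invented for illustration.
from dataclasses import dataclass
from datetime import date, datetime


@dataclass(frozen=True)
class Trade:
    symbol: str
    quantity: int
    price: float
    traded_at: date


def normalize_trades(raw_rows):
    """Clean raw ingested rows: strip whitespace, coerce types, and drop
    rows that fail basic validation instead of loading bad data."""
    cleaned = []
    for row in raw_rows:
        try:
            trade = Trade(
                symbol=row["symbol"].strip().upper(),
                quantity=int(row["quantity"]),
                price=float(row["price"]),
                traded_at=datetime.strptime(row["traded_at"], "%Y-%m-%d").date(),
            )
        except (KeyError, ValueError, AttributeError):
            continue  # reject malformed rows rather than propagate them
        if trade.quantity > 0 and trade.price > 0:
            cleaned.append(trade)
    return cleaned


# A unit test in the style the posting emphasizes (pytest-compatible):
def test_normalize_trades_filters_bad_rows():
    raw = [
        {"symbol": " aapl ", "quantity": "10", "price": "187.5", "traded_at": "2026-02-10"},
        {"symbol": "MSFT", "quantity": "-5", "price": "400", "traded_at": "2026-02-10"},  # non-positive qty
        {"symbol": "GOOG", "quantity": "oops", "price": "1", "traded_at": "2026-02-10"},  # bad type
    ]
    result = normalize_trades(raw)
    assert len(result) == 1
    assert result[0].symbol == "AAPL"
```

The design choice sketched here, validating and rejecting rows at the ingestion boundary and pinning that behavior with a unit test, is one common way to satisfy the "unit tests and isolated integration tests" expectation in the responsibilities above.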