VySystems

Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer with 10+ years of experience, based on Broadway in Manhattan, NY. The position requires expertise in Snowflake on AWS and AI-driven data platforms, along with strong Python and SQL skills. On-site work is mandatory.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
April 15, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Manhattan, NY
-
🧠 - Skills detailed
#Kafka (Apache Kafka) #dbt (data build tool) #Snowflake #Metadata #Lambda (AWS Lambda) #Data Engineering #S3 (Amazon Simple Storage Service) #Scala #Data Architecture #Data Modeling #Airflow #AWS (Amazon Web Services) #Databases #Cloud #Big Data #AWS S3 (Amazon Simple Storage Service) #Strategy #EC2 #Spark (Apache Spark) #AI (Artificial Intelligence) #Leadership #Deployment #Programming #Redshift #Python #Code Reviews #ETL (Extract, Transform, Load) #ML (Machine Learning) #Datasets #Athena #Data Orchestration #IAM (Identity and Access Management) #SQL (Structured Query Language) #Data Pipeline
Role description
Role: Data Engineer (10+ years mandatory)
Location: Broadway, NY (Tri-State candidates only)

We are hiring a Lead Data Engineer / Snowflake Engineer with deep expertise in Snowflake on AWS and hands-on exposure to Agentic AI / Generative AI–driven data platforms. This role will lead the design, modernization, and scaling of cloud-native data architectures that power analytics and AI initiatives.

Key Responsibilities
• Architect, design, and optimize Snowflake data platforms on AWS for high performance and cost efficiency
• Lead end-to-end ELT/ETL pipelines using Snowflake, AWS services, and modern data engineering tools
• Implement advanced Snowflake features (performance optimization, warehousing strategy, Data Sharing, Streams & Tasks, Time Travel, Zero-Copy Cloning)
• Design data foundations that support Agentic AI and Generative AI workloads, including AI-ready datasets, vectorized data, and metadata-driven pipelines
• Collaborate with AI/ML teams to enable autonomous agents, LLM-driven analytics, and intelligent data orchestration
• Provide technical leadership, code reviews, and mentoring to data engineering teams
• Partner with business and product stakeholders to translate analytics and AI requirements into scalable data solutions

Required Skills & Experience
• 8–10 years of experience in big data engineering / analytics
• Expert-level Snowflake experience, including large-scale production deployments
• Strong hands-on experience with AWS (S3, EC2, Lambda, Glue, Redshift/Athena, IAM, CloudWatch, Step Functions)
• Proven experience building cloud-native data architectures on AWS
• Solid programming skills in Python and SQL
• Experience with data modeling for analytics and AI use cases
• Hands-on or applied exposure to Agentic AI, Generative AI, or AI-driven data platforms
• Experience leading or mentoring engineering teams in enterprise environments

Highly Desirable
• Experience integrating LLMs, autonomous agents, or AI orchestration frameworks with data platforms
• Exposure to vector databases, embeddings, or AI-optimized data pipelines
• Experience with dbt, Airflow, Kafka, Spark, or similar tools
• Prior on-site experience in large, complex enterprise data ecosystems
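As a purely illustrative sketch (not part of the client's requirements), the "metadata-driven pipelines" mentioned above often boil down to generating Snowflake load statements from declarative table configs, so that adding a new source means adding metadata rather than new pipeline code. All stage, table, and format names below are hypothetical:

```python
# Illustrative sketch of a metadata-driven load step. All object names are
# hypothetical; a real pipeline would execute the generated SQL through the
# Snowflake connector rather than printing it.
from dataclasses import dataclass


@dataclass
class TableConfig:
    """Metadata describing one source-to-Snowflake load."""
    source_stage: str   # Snowflake external stage backed by S3 (hypothetical)
    target_table: str   # fully qualified Snowflake target table
    file_format: str    # named file format defined in Snowflake


def copy_statement(cfg: TableConfig) -> str:
    """Generate a Snowflake COPY INTO statement from pipeline metadata."""
    return (
        f"COPY INTO {cfg.target_table} "
        f"FROM @{cfg.source_stage} "
        f"FILE_FORMAT = (FORMAT_NAME = '{cfg.file_format}')"
    )


# One metadata entry drives one load; onboarding a new table is a config
# change, not a code change.
orders = TableConfig(
    source_stage="raw.s3_orders_stage",
    target_table="analytics.raw.orders",
    file_format="raw.parquet_fmt",
)
print(copy_statement(orders))
```

In practice, configs like this would live in a catalog or YAML files, and an orchestrator such as Airflow or dbt would iterate over them to schedule each load.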