

zAnswer LLC
Big Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Big Data Engineer (Lead Level) on a long-term contract in NYC, NY. Key skills include Hadoop, Snowflake, Kafka, Spark, and data pipeline development. Requires 12+ years of experience in data engineering and cloud platforms.
Country: United States
Currency: $ USD
Day rate: Unknown
Date: December 19, 2025
Duration: Unknown
Location: On-site
Contract: Unknown
Security: Unknown
Location detailed: New York, NY
Skills detailed: #Cloud #Datasets #Azure #Scala #Data Pipeline #Visualization #Java #Data Analysis #DevOps #Data Processing #Big Data #Docker #Scripting #Batch #Airflow #Spark (Apache Spark) #ETL (Extract, Transform, Load) #Hadoop #Tableau #SQL (Structured Query Language) #GCP (Google Cloud Platform) #AWS (Amazon Web Services) #NiFi (Apache NiFi) #Snowflake #Redshift #Data Engineering #Kubernetes #Automation #Python #Kafka (Apache Kafka) #Data Lake #Programming #Microsoft Power BI #HDFS (Hadoop Distributed File System) #Storage #BigQuery #Pig #Data Quality #Security #Data Storage #BI (Business Intelligence)
Role description
Role: Big Data Engineer (Lead Level)
Location: NYC, NY (Onsite)
Job Type: Contract
Duration: Long Term
Overview
We are seeking a highly skilled Data Engineer with deep expertise in Big Data technologies, data lakes, and modern analytics platforms. The ideal candidate will design, build, and optimize scalable data pipelines that support advanced analytics and business intelligence. This role requires strong hands-on experience with Hadoop ecosystems, Snowflake, Kafka, Spark, and other distributed data platforms.
Responsibilities
• Design, develop, and maintain data pipelines for ingesting, transforming, and delivering large-scale datasets.
• Manage and optimize data lake architectures to ensure scalability, reliability, and performance.
• Implement and support Hadoop-based solutions for distributed data processing.
• Integrate and manage Snowflake for cloud-based data warehousing and analytics.
• Build and maintain real-time streaming solutions using Kafka.
• Develop and optimize Spark applications for batch and streaming workloads (an illustrative sketch of this kind of job follows this list).
• Collaborate with data analysts, scientists, and business stakeholders to deliver actionable insights.
• Ensure data quality, governance, and security across all platforms.
• Monitor and troubleshoot data pipelines to maintain high availability and performance.
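To give candidates a concrete flavor of the Kafka and Spark work above, here is a minimal, illustrative PySpark Structured Streaming job that reads events from a Kafka topic and appends them to a data-lake path. The broker address, topic name, and storage paths are hypothetical placeholders, not details of this engagement.

# Illustrative sketch only; broker, topic, and paths are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = (
    SparkSession.builder
    .appName("orders-stream")  # hypothetical application name
    .getOrCreate()
)

# Read a stream of raw events from Kafka.
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "orders")
    .load()
)

# Kafka delivers key/value as binary; cast the payload to string.
parsed = events.select(col("value").cast("string").alias("payload"))

# Append the stream to the data lake with checkpointing so the job
# can recover its position after a failure.
query = (
    parsed.writeStream
    .format("parquet")
    .option("path", "s3a://data-lake/raw/orders/")  # placeholder path
    .option("checkpointLocation", "s3a://data-lake/_chk/orders/")
    .outputMode("append")
    .start()
)

query.awaitTermination()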
Skills & Qualifications
Core Skills
• Big Data Ecosystem: Hadoop (HDFS, Hive, Pig, MapReduce), Spark, Kafka.
• Cloud Data Warehousing: Snowflake (preferred), Redshift, BigQuery.
• Data Lake Management: Experience with large-scale data storage and retrieval.
• Data Pipelines: ETL/ELT design, orchestration tools (Airflow, NiFi, etc.); see the DAG sketch after this list.
• Programming & Scripting: Python, Scala, Java, SQL.
• Data Analysis: Strong ability to query, analyze, and interpret large datasets.
• Distributed Systems: Understanding of scalability, fault tolerance, and performance optimization.
• DevOps & Automation: CI/CD pipelines, containerization (Docker, Kubernetes).
• Visualization & BI Tools: Familiarity with Tableau, Power BI, or similar.
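For the orchestration item above, the following is a minimal sketch of an extract-transform-load DAG in Airflow (one of the tools named), assuming Airflow 2.4 or later. The DAG id, schedule, and task bodies are hypothetical placeholders.

# Illustrative sketch only; DAG id, schedule, and callables are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull raw records from the source system")


def transform():
    print("clean and conform the records")


def load():
    print("write the records to the warehouse")


with DAG(
    dag_id="daily_orders_etl",  # hypothetical DAG id
    start_date=datetime(2025, 1, 1),
    schedule="@daily",          # "schedule" keyword assumes Airflow >= 2.4
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    # Dependencies: extract runs first, then transform, then load.
    t_extract >> t_transform >> t_load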
Preferred Qualifications
• 12+ years of experience in data engineering or big data roles.
• Experience with cloud platforms (AWS, Azure, GCP).
• Strong problem-solving and analytical mindset.
• Excellent communication and collaboration skills.