

CoreTek Labs
Data Engineer (Greenplum)
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is a Data Engineer (Greenplum) position in New York, NY, requiring 12+ years of experience and offering up to $65/hr. Key skills include Greenplum, Big Data, and Hadoop. H1B visa holders only; PP number required.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
520
🗓️ - Date
January 31, 2026
🕒 - Duration
Unknown
🏝️ - Location
On-site
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
New York, NY
🧠 - Skills detailed
#Datasets #Storage #Data Quality #Data Engineering #HDFS (Hadoop Distributed File System) #Scala #Big Data #Security #Databases #Kafka (Apache Kafka) #Spark (Apache Spark) #Data Pipeline #Monitoring #Documentation #Data Processing #Hadoop #Compliance #ETL (Extract, Transform, Load) #Agile #Greenplum
Role description
Role: Data Engineer
Location: New York, NY
Exp: 12+ Years
Rate: Up to $65/hr
Only H1B visa holders – PP number is mandatory
Primary Skill: Greenplum, Big Data, Hadoop
Job Description:
• Strong work experience required; an Agile environment is preferred. The Data Engineer (Big Data: Hadoop, Greenplum, etc., working with the Data Owner) designs and builds scalable data pipelines, integrates diverse sources, and optimizes storage and processing using the Hadoop ecosystem and Greenplum.
• Ensures data quality, security, and compliance through governance frameworks. Implements orchestration, monitoring, and performance tuning for reliable, cost-efficient operations.
• Expertise in the Hadoop ecosystem (HDFS, Hive, Spark, Kafka) and MPP databases such as Greenplum for large-scale data processing and optimization.
• Collaborates with Data Owners and stakeholders to translate business rules into technical solutions.
• Delivers curated datasets, lineage, and documentation aligned with SLAs and regulatory standards.
• Acts as a subject matter expert with experience interacting with clients, understanding requirements, and guiding the team.
• Documents requirements clearly with a defined scope, and plays an anchor role in setting the right expectations and delivering on schedule.
• Design and develop scalable data pipelines using the Hadoop ecosystem and Greenplum for ingestion, transformation, and storage of large datasets (a minimal PySpark sketch follows this list).
• Optimize data models and queries for performance and reliability, ensuring compliance with security and governance standards (see the Greenplum table-design sketch after this list).
• Implement data quality checks, monitoring, and orchestration workflows for timely and accurate data delivery (see the data-quality sketch after this list).
• Collaborate with Data Owners and business teams to translate requirements into technical solutions and maintain documentation and lineage.
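To make the pipeline responsibilities concrete, here is a minimal, illustrative PySpark sketch of batch ingestion from HDFS into a curated Parquet zone. All paths, column names, and the "events" dataset are hypothetical assumptions for illustration, not details from this posting.

```python
# Hypothetical batch job: land raw JSON from HDFS, deduplicate on a key,
# and publish date-partitioned Parquet to a curated zone.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events-ingest").getOrCreate()

# Landing and curated paths are placeholders, not from the posting.
raw = spark.read.json("hdfs:///landing/events/")

curated = (
    raw
    .filter(F.col("event_id").isNotNull())             # assumes an event_id key
    .dropDuplicates(["event_id"])
    .withColumn("event_date", F.to_date("event_ts"))   # assumes an event_ts column
)

(
    curated.write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("hdfs:///curated/events/")
)
```

Partitioning the output by date keeps downstream reads pruned to the time ranges they actually need.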
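On the Greenplum side, performance depends heavily on the choice of distribution key and partitioning scheme. The DDL below is a hedged sketch of that idea, executed through psycopg2; the table, columns, and connection string are assumptions, not values from the role description.

```python
# Illustrative Greenplum table design: DISTRIBUTED BY spreads rows across
# segments on a high-cardinality key, and monthly range partitions enable
# partition pruning on time-bounded queries. All names are hypothetical.
import psycopg2

DDL = """
CREATE TABLE curated.events (
    event_id BIGINT,
    event_ts TIMESTAMP,
    payload  TEXT
)
DISTRIBUTED BY (event_id)
PARTITION BY RANGE (event_ts)
(
    START ('2026-01-01'::timestamp) INCLUSIVE
    END   ('2027-01-01'::timestamp) EXCLUSIVE
    EVERY (INTERVAL '1 month')
);
"""

# Placeholder DSN; real credentials would come from a secrets store.
with psycopg2.connect("dbname=analytics user=etl host=gp-master") as conn:
    with conn.cursor() as cur:
        cur.execute(DDL)
```

Distributing on the join key avoids segment-to-segment data motion when large tables are joined on that key.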
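Finally, a simple data-quality gate of the kind the checks bullet describes: validate the curated output before it is published downstream. The thresholds and paths are again assumptions for illustration.

```python
# Hypothetical data-quality gate: fail fast if the curated dataset is empty
# or contains null keys, so bad data never reaches consumers.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events-dq").getOrCreate()
df = spark.read.parquet("hdfs:///curated/events/")  # placeholder path

row_count = df.count()
null_keys = df.filter(F.col("event_id").isNull()).count()

# Thresholds are illustrative; real SLAs would drive these values.
if row_count == 0 or null_keys > 0:
    raise ValueError(f"DQ failed: rows={row_count}, null event_id={null_keys}")

print(f"DQ passed: {row_count} rows, no null keys")
```

In practice a gate like this would run as an orchestrated step (e.g., an Airflow task) whose failure blocks publication, which is what ties the quality, monitoring, and orchestration responsibilities together.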