

CoreTek Labs
Big Data Developer
Featured Role | Apply directly with Data Freelance Hub
This role is for a Big Data Developer with 12+ years of experience, offering a contract in NYC, NY. Key skills include Java/Python, Hadoop Ecosystem, Spark, and Greenplum. Strong Agile experience and data governance expertise are required.
Country
United States
Currency
$ USD
Day rate
Unknown
Date
December 18, 2025
Duration
Unknown
Location
On-site
Contract
Unknown
Security
Unknown
Location detailed
New York, United States
Skills detailed
#Scala #Monitoring #Spark (Apache Spark) #Data Processing #Agile #Databases #HDFS (Hadoop Distributed File System) #Python #Data Quality #Security #Kafka (Apache Kafka) #Data Pipeline #Documentation #Java #Compliance #Big Data #Greenplum #Datasets #Storage #Hadoop #Data Engineering
Role description
Role: Big Data Engineer (12+ years' experience only).
Location: NYC, NY (on-site 5 days per week)
Mode of Hire: Contract
Job Description:
Mandatory Skills: Expert-level coding in Java/Python; Hadoop Ecosystem (HDFS), Spark, Hive.
• Strong hands-on work experience; Agile environment preferred
• Data Engineer (Big Data: Hadoop, Greenplum, etc.; Data Owner)
• Designs and builds scalable data pipelines, integrates diverse sources, and optimizes storage/processing using the Hadoop ecosystem and Greenplum (see the sketch after this list).
• Ensures data quality, security, and compliance through governance frameworks.
• Implements orchestration, monitoring, and performance tuning for reliable, cost-efficient operations.
• Expertise in the Hadoop ecosystem (HDFS, Hive, Spark, Kafka) and MPP databases such as Greenplum for large-scale data processing and optimization.
• Collaborates with Data Owners and stakeholders to translate business rules into technical solutions.
• Delivers curated datasets, lineage, and documentation aligned with SLAs and regulatory standards.
• Subject matter expert with experience interacting with clients, understanding requirements, and guiding the team.
• Documents requirements clearly with a defined scope, and plays an anchor role in setting the right expectations and delivering on schedule.
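For orientation only, the following PySpark sketch shows the general shape of the pipeline work described above: read a raw Hive table from the Hadoop cluster, apply simple data-quality rules, and load the curated result into Greenplum over JDBC. It is a minimal illustration, not the client's actual design; all table names, hosts, and credentials are hypothetical placeholders, and a real project might use the dedicated Greenplum-Spark connector rather than plain JDBC.

    # Minimal sketch (hypothetical names): batch curation job from Hive/HDFS to Greenplum.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = (
        SparkSession.builder
        .appName("trades-curation")        # hypothetical job name
        .enableHiveSupport()               # allows reading managed Hive tables on HDFS
        .getOrCreate()
    )

    # Source: raw events landed in Hive (hypothetical database/table).
    raw = spark.table("raw_db.trade_events")

    # Basic data-quality rules: drop records missing keys, deduplicate, derive a partition date.
    curated = (
        raw.filter(F.col("trade_id").isNotNull() & F.col("event_ts").isNotNull())
           .dropDuplicates(["trade_id"])
           .withColumn("event_date", F.to_date("event_ts"))
    )

    # Sink: Greenplum via the standard PostgreSQL JDBC driver (hypothetical host and credentials).
    (curated.write
        .format("jdbc")
        .option("url", "jdbc:postgresql://greenplum-host:5432/analytics")
        .option("dbtable", "curated.trade_events")
        .option("user", "etl_user")
        .option("password", "***")
        .option("driver", "org.postgresql.Driver")
        .mode("append")
        .save())

    spark.stop()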






