

engineersmind
Big Data Developer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Big Data Developer with 3+ years of experience in Hadoop and data engineering. Key skills include proficiency with Hadoop ecosystem tools, Apache Spark, Python, SQL, and ETL pipeline development. The work location is listed as "Remote," and the pay rate as "$X/hour."
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
October 29, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Jersey City, NJ
-
🧠 - Skills detailed
#SQL (Structured Query Language) #Data Processing #PySpark #Data Wrangling #Data Modeling #Data Engineering #Apache Spark #HDFS (Hadoop Distributed File System) #Spark (Apache Spark) #Scripting #Automation #Shell Scripting #YARN (Yet Another Resource Negotiator) #Python #Big Data #Pig #Linux #Programming #ETL (Extract, Transform, Load) #Unix #Hadoop #Impala #Spark SQL
Role description
3+ years of hands-on experience as a Big Data / Hadoop Developer or Data Engineer.
Proficiency in Hadoop ecosystem tools: HDFS, MapReduce, YARN, Hive, Pig.
Strong experience with Apache Spark (PySpark, Spark SQL, Spark Streaming).
Working knowledge of Impala for data querying and performance optimization.
Solid programming skills in Python (for scripting, data wrangling, and automation).
Experience building ETL pipelines and working with large-scale data processing frameworks.
Strong understanding of SQL and data modeling concepts.
Familiarity with Linux/Unix environments and shell scripting.
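The Python and ETL requirements above can be illustrated with a minimal sketch of the extract-transform-load pattern. This is an invented example, not part of the role's actual codebase: the data, field names, and functions are hypothetical, and a real pipeline would target Spark or Hive rather than in-memory Python.

```python
import csv
import io

def extract(source):
    """Extract: parse raw CSV text into row dicts."""
    return list(csv.DictReader(io.StringIO(source)))

def transform(rows):
    """Transform: cast amounts to float and drop incomplete rows."""
    return [
        {"region": r["region"], "amount": float(r["amount"])}
        for r in rows
        if r.get("region") and r.get("amount")
    ]

def load(rows):
    """Load: aggregate amounts per region (stand-in for a real sink)."""
    totals = {}
    for r in rows:
        totals[r["region"]] = totals.get(r["region"], 0.0) + r["amount"]
    return totals

raw = "region,amount\neast,10.5\nwest,4.0\neast,2.5\n"
print(load(transform(extract(raw))))  # → {'east': 13.0, 'west': 4.0}
```

In a PySpark setting the same three stages would map to reading a DataFrame, applying transformations, and writing to a sink, but the staged structure is the same.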






