

Big Data Developer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Big Data Developer in Jersey City, NJ/NYC, NY, offering a contract with a pay rate of "TBD." Key skills include Java Spark, AWS (S3, Glue, Athena), Python, SQL, and Unix Shell Scripting.
Country
United States
Currency
$ USD
Day rate
-
Date discovered
August 1, 2025
Project duration
Unknown
Location type
On-site
Contract type
Unknown
Security clearance
Unknown
Location detailed
Jersey City, NJ
Skills detailed
#Data Lake #Snowflake #SQL (Structured Query Language) #Unix #Java #Shell Scripting #Scripting #Logging #Spark (Apache Spark) #AWS (Amazon Web Services) #Python #Jira #Batch #AWS S3 (Amazon Simple Storage Service) #Scala #Big Data #S3 (Amazon Simple Storage Service) #ETL (Extract, Transform, Load) #Storage #Data Ingestion #Athena #Distributed Computing
Role description
Title: Java Spark Developer
Location: Jersey City, NJ / NYC, NY
Job Description:
Mandatory Skills:
• Strong experience in Java Spark and a solid understanding of distributed computing, big data principles, and batch processing.
• Proficiency with AWS services, especially S3, Glue, and Athena.
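The mandatory skills above center on batch processing: grouping and aggregating large record sets in bulk rather than event by event. As a minimal, hypothetical sketch of that pattern (shown in plain Python rather than Java Spark; the function name and sample data are invented for illustration):

```python
from collections import defaultdict

def batch_aggregate(records):
    """Group (key, value) records and sum per key -- the same
    map/reduce pattern a Spark batch job applies at cluster scale."""
    totals = defaultdict(float)
    for key, value in records:   # "map" phase: each record emits (key, value)
        totals[key] += value     # "reduce" phase: sum values per key
    return dict(totals)

# Hypothetical daily records: (symbol, notional)
trades = [("AAPL", 100.0), ("MSFT", 50.0), ("AAPL", 25.0)]
print(batch_aggregate(trades))  # {'AAPL': 125.0, 'MSFT': 50.0}
```

In Spark the same aggregation would be expressed as a `groupBy`/`agg` over a distributed dataset; the sketch only illustrates the shape of the computation.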
Detailed Description:
• Combine interface design concepts with digital design and establish milestones to encourage cooperation and teamwork.
• Develop overall concepts for improving the user experience within a business webpage or product, ensuring all interactions are intuitive and convenient for customers.
• Collaborate with back-end web developers and programmers to improve usability.
• Conduct thorough testing of user interfaces across multiple platforms to ensure all designs render correctly and systems function properly.
• Understanding of Data Lake architectures and experience handling large volumes of structured and unstructured data; good hands-on experience with Snowflake and SQL; building scalable solutions to manage data ingestion, transformation, and storage in AWS-based Data Lake environments; familiarity with various data formats.
• Strong Python skills; the candidate will write Python scripts to automate data transformation processes.
• Good experience with Unix commands and shell scripting, to create new shell scripts or extend existing ones with logging and error-handling functionality.
• IntelliJ (IDE), Confluence, and Jira are required; Control-M (scheduling tool) is optional.
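The scripting duties above amount to automating a transformation step with logging and error handling. A minimal sketch in Python, assuming a hypothetical CSV cleanup step (the file layout and the `id` column are invented for illustration):

```python
import csv
import logging

logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("transform")

def transform_file(src_path, dst_path):
    """Read a raw CSV, drop rows with an empty 'id' column, and write the
    cleaned rows out -- logging row counts and failing loudly on bad input."""
    kept = dropped = 0
    try:
        with open(src_path, newline="") as src, \
             open(dst_path, "w", newline="") as dst:
            reader = csv.DictReader(src)
            writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
            writer.writeheader()
            for row in reader:
                if row.get("id"):       # keep only rows with a non-empty id
                    writer.writerow(row)
                    kept += 1
                else:
                    dropped += 1
    except (OSError, csv.Error) as exc:
        log.error("transform failed for %s: %s", src_path, exc)
        raise                           # let the scheduler see the failure
    log.info("wrote %s: kept=%d dropped=%d", dst_path, kept, dropped)
    return kept, dropped
```

In practice a wrapper shell script would invoke such a step and append its log output to a run log, which is the enhancement work the bullet describes.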
Good to have:
• Python and SQL skills
• Unix shell scripting, ETL, AWS S3
• Athena and Snowflake
• Control-M scheduling (optional); tools: IntelliJ, Confluence, Jira