

Big Data Developer - Whippany, NJ
Featured Role | Apply direct with Data Freelance Hub
This role is for a Big Data Developer in Whippany, NJ; the contract length and pay rate are unspecified. Required skills include Python, PySpark, and enterprise financial services experience, with expertise in ETL pipelines and data transformations.
Country: United States
Currency: $ USD
Day rate: Unknown
Date discovered: August 2, 2025
Project duration: Unknown
Location type: On-site
Contract type: Unknown
Security clearance: Unknown
Location detailed: Whippany, NJ
Skills detailed: #AWS EMR (Amazon Elastic MapReduce) #PySpark #Big Data #Kafka (Apache Kafka) #Python #Data Processing #Scala #ETL (Extract, Transform, Load) #.NET #Cloud #Data Engineering #AWS (Amazon Web Services) #Data Science #Datasets #Data Quality #Spark (Apache Spark) #Databricks #Hadoop #Data Transformations #Batch
Role description
Python PySpark Big Data Developer
Whippany, NJ
Must have: Enterprise Financial Services experience required.
We are seeking a Python PySpark Big Data Developer with strong expertise in building scalable data processing solutions using PySpark and Python.
You will design, develop, and optimize high-performance ETL pipelines to process large datasets in distributed environments, ensuring data quality, integration, and fault tolerance.
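To illustrate the kind of pipeline this role involves, here is a minimal PySpark batch ETL sketch. It is not part of the original posting; the S3 paths, column names, and filter rules are assumptions chosen for the example.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical batch ETL job: all paths and column names are illustrative.
spark = SparkSession.builder.appName("transactions-etl").getOrCreate()

# Extract: read raw records (e.g. from S3 when running on AWS EMR).
raw = spark.read.parquet("s3://example-bucket/raw/transactions/")

# Transform: deduplicate, apply basic data-quality filters, derive a partition column.
cleaned = (
    raw.dropDuplicates(["transaction_id"])
       .filter(F.col("amount").isNotNull() & (F.col("amount") > 0))
       .withColumn("trade_date", F.to_date("event_ts"))
)

# Load: write partitioned Parquet for downstream consumers.
cleaned.write.mode("overwrite").partitionBy("trade_date").parquet(
    "s3://example-bucket/curated/transactions/"
)

spark.stop()
```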
This role involves collaborating with data scientists, data engineers, and stakeholders to translate business requirements into robust data workflows, and implementing complex data transformations using Spark, Hadoop, Kafka, or cloud platforms like AWS EMR or Databricks. Strong problem-solving skills, hands-on experience in performance tuning of PySpark jobs, and the ability to write clean, maintainable Python code for both batch and streaming scenarios are essential.
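Since the role covers both batch and streaming scenarios, a minimal Structured Streaming sketch reading from Kafka is shown below; the broker address, topic name, and payload schema are assumptions, and the console sink stands in for a real destination.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

# Hypothetical streaming job: broker, topic, and schema are illustrative only.
# Requires the spark-sql-kafka connector package on the classpath.
spark = SparkSession.builder.appName("transactions-stream").getOrCreate()

schema = StructType([
    StructField("transaction_id", StringType()),
    StructField("amount", DoubleType()),
])

# Read the Kafka topic as an unbounded streaming DataFrame.
stream = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "transactions")
    .load()
)

# Parse the JSON payload and keep only valid records.
parsed = (
    stream.select(F.from_json(F.col("value").cast("string"), schema).alias("txn"))
    .select("txn.*")
    .filter(F.col("amount") > 0)
)

# Write each micro-batch to the console; checkpointing enables fault tolerance.
query = (
    parsed.writeStream.outputMode("append")
    .format("console")
    .option("checkpointLocation", "/tmp/checkpoints/transactions")
    .start()
)
query.awaitTermination()
```

For performance tuning of jobs like these, typical levers include shuffle partition counts (spark.sql.shuffle.partitions), broadcast joins for small dimension tables, and caching only datasets that are reused.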
mohan@synergent.net