

PySpark Developer - W2 Role
Featured Role | Apply direct with Data Freelance Hub
This role is for a PySpark Developer in Iselin, NJ (Hybrid, 3 days onsite) with a contract length of 2+ years at a W2 pay rate. Requires 5+ years in data engineering, strong PySpark and SQL skills, and ETL experience.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: June 18, 2025
Project duration: More than 6 months
Location type: Hybrid
Contract type: W2 Contractor
Security clearance: Unknown
Location detailed: Iselin, NJ
Skills detailed: #Data Quality #PySpark #Scala #Data Architecture #Data Extraction #Big Data #Python #Code Reviews #"ETL (Extract, Transform, Load)" #Spark SQL #Data Pipeline #Spark (Apache Spark) #SQL (Structured Query Language) #Database Systems #Compliance #SQL Queries #Deployment #Data Engineering
Role description
Job Title: PySpark Developer
Location: Iselin, NJ (Hybrid 3 days per week onsite)
Duration: 2+ years (potential for full-time conversion with the end client)
Interview Process: One video interview (one and done)
Tax Term: W2 Only
Job Description:
Our client is seeking a highly skilled and experienced PySpark Developer to join its dynamic data engineering team. This role is ideal for a motivated professional with a strong background in big data technologies, ETL processes, and database systems. The successful candidate will play a key role in designing, developing, and optimizing data pipelines and analytics solutions that support critical business operations.
Key Responsibilities:
• Design, develop, and maintain scalable data pipelines using PySpark and Spark SQL (a minimal sketch of such a pipeline follows this list).
• Collaborate with data architects, analysts, and business stakeholders to understand data requirements and deliver robust solutions.
• Optimize and troubleshoot complex ETL workflows and data transformation processes.
• Write efficient SQL queries for data extraction, transformation, and loading.
• Ensure data quality, integrity, and compliance with enterprise standards.
• Participate in code reviews, testing, and deployment activities.
• Monitor and maintain production workflows, including job scheduling and performance tuning.
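To make the day-to-day concrete, here is a minimal sketch of the kind of extract-transform-load pipeline this role involves, combining the PySpark DataFrame API with Spark SQL. The source path, column names, and output location are illustrative assumptions, not details from the client:

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily_totals_etl").getOrCreate()

# Extract: read raw transactions (path, header, and columns are hypothetical)
raw = (spark.read
       .option("header", "true")
       .option("inferSchema", "true")
       .csv("/data/raw/transactions.csv"))

# Basic data-quality gate: drop rows with a missing amount
clean = raw.filter(F.col("amount").isNotNull())

# Transform: aggregate daily totals with Spark SQL
clean.createOrReplaceTempView("transactions")
daily = spark.sql("""
    SELECT CAST(txn_ts AS DATE) AS txn_date,
           SUM(amount)          AS total_amount,
           COUNT(*)             AS txn_count
    FROM transactions
    GROUP BY CAST(txn_ts AS DATE)
""")

# Load: write partitioned Parquet for downstream analytics
(daily.write
      .mode("overwrite")
      .partitionBy("txn_date")
      .parquet("/data/curated/daily_totals"))

spark.stop()

In practice the same pattern extends to incremental loads, schema enforcement, and performance tuning via spark-submit configuration.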
Required Skills & Qualifications:
• 5+ years of experience in data engineering or software development.
• Strong proficiency in PySpark, Python, and Spark SQL.
• Solid experience with SQL and relational database systems.
• Proven expertise in ETL development and data pipeline orchestration.
• Familiarity with job scheduling tools; Autosys experience is a plus (a scheduler-friendly entrypoint sketch follows this list).
• Excellent problem-solving skills and attention to detail.
• Strong communication and collaboration abilities.
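On the scheduling point: schedulers such as Autosys typically launch a batch job as a command and judge success by its exit code. A minimal sketch of a Python entrypoint written to that convention, assuming a hypothetical run_pipeline() that wraps a PySpark job like the one sketched above:

import logging
import sys

logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("daily_totals_etl")

def run_pipeline() -> None:
    # Placeholder for the actual PySpark pipeline (see the sketch above).
    log.info("pipeline finished")

if __name__ == "__main__":
    try:
        run_pipeline()
        sys.exit(0)  # zero exit code: the scheduler marks the job a success
    except Exception:
        log.exception("pipeline failed")
        sys.exit(1)  # nonzero exit code: the scheduler can alert or retry

The same entrypoint works under cron or any orchestrator that honors process exit codes.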