

Astir IT Solutions, Inc.
ETL Developer (PySpark) – W2 Only
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for an ETL Developer (PySpark) in Whippany, NJ, hybrid (2 days onsite), with a contract length of "unknown" and a pay rate of "unknown." Requires 12+ years of ETL experience, 7+ years with PySpark, and expertise in SQL and cloud platforms.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
October 17, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Hybrid
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Whippany, NJ
-
🧠 - Skills detailed
#Teradata #Databases #PySpark #Code Reviews #Data Ingestion #Data Engineering #Data Modeling #Azure #Airflow #Scala #SQL (Structured Query Language) #Data Quality #Data Architecture #Debugging #Oracle #Snowflake #Python #GCP (Google Cloud Platform) #Spark SQL #SQL Server #Cloud #Data Orchestration #AWS (Amazon Web Services) #ETL (Extract, Transform, Load) #Spark (Apache Spark) #Data Processing
Role description
Job Title: ETL Developer (PySpark) – W2 Only
Location: Whippany, NJ (Hybrid – 2 days onsite per week)
Minimum of 12 years of experience required.
Key Responsibilities:
• Design, develop, and maintain robust ETL workflows using PySpark on large-scale data platforms.
• Optimize data processing pipelines for performance, scalability, and reliability.
• Collaborate with data architects, analysts, and application teams to understand data requirements and ensure smooth data flow across systems.
• Perform data quality checks, validation, and error handling to ensure accurate and reliable data ingestion.
• Develop and maintain reusable ETL frameworks and best practices for team adoption.
• Work with structured and unstructured data from multiple sources including APIs, flat files, and databases.
• Troubleshoot performance issues and provide production support for ETL jobs.
• Participate in code reviews and contribute to the continuous improvement of ETL processes and standards.
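The data-quality, validation, and error-handling responsibility above can be sketched in plain Python (a simplified, hypothetical example; in a real PySpark pipeline the same checks would be expressed with DataFrame operations, and the column names and rules here are illustrative assumptions):

```python
# Hypothetical row-level validation: split incoming records into clean rows
# and quarantined error records instead of failing the whole load.
def validate_rows(rows, required=("id", "amount")):
    clean, errors = [], []
    for row in rows:
        # Reject rows with missing required fields.
        missing = [f for f in required if row.get(f) in (None, "")]
        if missing:
            errors.append({"row": row, "reason": f"missing fields: {missing}"})
            continue
        # Coerce types, quarantining rows that cannot be converted.
        try:
            row = {**row, "amount": float(row["amount"])}
        except (TypeError, ValueError):
            errors.append({"row": row, "reason": "amount not numeric"})
            continue
        clean.append(row)
    return clean, errors
```

In PySpark, this split is typically done with `filter`/`withColumn` expressions, writing rejected rows to a quarantine table so ingestion stays accurate and auditable.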
Required Qualifications:
• 12+ years of total experience in ETL development and data engineering.
• 7+ years of hands-on experience with PySpark in a production environment.
• Strong experience with Spark SQL, DataFrames, and RDD transformations.
• Proficiency in Python and experience with data orchestration tools (Airflow, Oozie, or similar).
• Expertise in SQL and working with large relational databases (e.g., Oracle, SQL Server, Snowflake, or Teradata).
• Solid understanding of data modeling, data warehousing concepts, and ETL best practices.
• Experience with cloud-based data platforms (AWS, Azure, or GCP) is highly preferred.
• Strong analytical, debugging, and problem-solving skills.
• Excellent communication and teamwork abilities.
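The RDD transformation pattern named in the qualifications (a `map` followed by `reduceByKey`) can be mimicked in plain Python for illustration. This is a hedged stand-in, not the Spark API itself; pyspark's actual calls are `rdd.map(...)` and `rdd.reduceByKey(...)`:

```python
from collections import defaultdict

# Plain-Python stand-in for an RDD map -> reduceByKey aggregation.
def reduce_by_key(pairs):
    # pairs yields (key, value) tuples, like the output of rdd.map
    acc = defaultdict(int)
    for key, value in pairs:
        acc[key] += value  # equivalent to reduceByKey(operator.add)
    return dict(acc)

words = ["etl", "spark", "etl"]
counts = reduce_by_key((w, 1) for w in words)
```

In PySpark the same word count would be `sc.parallelize(words).map(lambda w: (w, 1)).reduceByKey(add).collect()`, with the reduction running in parallel across partitions.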
If I missed your call, please drop me an email.
Thank you,
Harish
Accounts Manager/Talent Acquisition
Astir IT Solutions, Inc - An E-Verified Company
Email: harishj@astirit.com
Direct: 732-694-6000, Ext. 788
50 Cragwood Rd. Suite # 219, South Plainfield, NJ 07080
www.astirit.com