Iris Software Inc.

Senior ETL Developer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior ETL Developer with over 12 years of experience, focusing on ETL, Data Warehousing, PySpark, and Python. The long-term contract is based in Whippany, New Jersey, requiring onsite work 2-3 days a week.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
November 1, 2025
🕒 - Duration
More than 6 months
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Whippany, NJ
-
🧠 - Skills detailed
#Data Mart #Ab Initio #Data Manipulation #Data Modeling #PySpark #Database Design #"ETL (Extract, Transform, Load)" #Scripting #Data Extraction #Shell Scripting #Automation #Data Warehouse #Spark (Apache Spark) #Unix #Data Integration #Programming #Data Pipeline #Python #SQL (Structured Query Language) #Data Lake
Role description
We are hiring for a Senior ETL Developer.

Title: Senior ETL Developer
Required Skills: ETL, Data Warehousing, PySpark, and Python
Location: Whippany, New Jersey (onsite 2 to 3 days per week)
Experience: Over 12 years required
Contract: Long-term

Please email your resume to ankit.grover01@irissoftware.com.

Note from Manager: The client is migrating from Ab Initio to PySpark and needs an architect/developer who can review the existing Ab Initio architecture and propose a recommended design in PySpark.

Key Responsibilities:
• We are seeking an experienced and detail-oriented ETL Developer to design, build, and maintain data integration solutions that enable seamless data flow across systems. The ideal candidate will have expertise in ETL tools, data modeling, and SQL, and a deep understanding of data warehousing concepts.
• Design, develop, and maintain ETL processes for extracting, transforming, and loading data from multiple sources into target systems (data warehouses, data marts, or data lakes).
• Design ETL processes according to business requirements, ensuring efficient data flow from source to destination, including selecting the right tools and methodologies for data extraction, transformation, and loading.
• Analyze and document: evaluate existing Ab Initio graphs, plans, and transformation logic to understand business requirements and data flows.
• Design and develop: build new, optimized ETL processes and data pipelines using PySpark scripts and modules to replicate the functionality of legacy Ab Initio jobs.
• Strong programming/scripting skills in Python and Unix/shell scripting for automation and custom logic.
• Expertise in SQL for complex querying, data manipulation, and validation.
• Solid understanding of data warehousing concepts, dimensional modeling, and database design principles.
• Proficiency in PySpark

Thanks and Regards,
Ankit Grover
Sr. Executive - TA
Iris Software
200 Metroplex Drive, Suite #300, Edison, NJ 08817
Email: ankit.grover01@irissoftware.com
Phone: 973-370-7012
www.irissoftware.com
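For candidates unfamiliar with the extract-transform-load pattern the responsibilities above describe, here is a minimal sketch in plain Python. All source names, field names, and transformation rules below are hypothetical illustrations, not client specifics; a production pipeline for this role would use PySpark DataFrames rather than in-memory lists.

```python
# Minimal ETL sketch (hypothetical data): extract raw rows, clean and
# reshape them, then load the result into a target store. A real PySpark
# version would swap the lists for DataFrames and run distributed.

def extract(rows):
    """Extract: pull raw records from a source (here, an in-memory list)."""
    return list(rows)

def transform(records):
    """Transform: normalize names, coerce types, and drop incomplete rows."""
    out = []
    for r in records:
        if r.get("amount") is None:
            continue  # filter out rows missing a required field
        out.append({
            "customer": r["customer"].strip().title(),
            "amount_usd": round(float(r["amount"]), 2),
        })
    return out

def load(records, target):
    """Load: aggregate into a target keyed by customer (stand-in for a warehouse table)."""
    for r in records:
        target[r["customer"]] = target.get(r["customer"], 0.0) + r["amount_usd"]
    return target

raw = [
    {"customer": "  alice ", "amount": "19.991"},
    {"customer": "bob", "amount": None},  # incomplete row, filtered out
    {"customer": "Alice", "amount": "5.0"},
]
warehouse = load(transform(extract(raw)), {})
print(warehouse)  # {'Alice': 24.99}
```

The same three-stage decomposition is what a migration from Ab Initio graphs to PySpark jobs preserves: each Ab Initio component's logic maps to a transform step expressed in Spark.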