

Iris Software Inc.
AWS Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for an AWS Data Engineer on a contract basis in Bethlehem, PA / Holmdel, NJ / New York, NY (hybrid). Requires expert SQL, Python, PySpark, and Databricks experience, plus familiarity with the Medallion Architecture. Pay rate unspecified.
Country: United States
Currency: $ USD
Day rate: Unknown
Date: December 9, 2025
Duration: Unknown
Location: Hybrid
Contract: Unknown
Security: Unknown
Location detailed: New Jersey, United States
Skills detailed: #Code Reviews #Data Pipeline #Data Engineering #Spark (Apache Spark) #Data Architecture #PySpark #Datasets #Python #Databases #SQL (Structured Query Language) #Databricks #AWS (Amazon Web Services) #Oracle #Cloud #ETL (Extract, Transform, Load) #Data Lake #Data Analysis
Role description
Our direct end client, one of the largest mutual life insurance companies, is urgently looking to hire an AWS Data Engineer in Bethlehem, PA / Holmdel, NJ / New York, NY (hybrid). This is a contract opportunity.
AWS Data Engineer
Location: Bethlehem, PA / Holmdel, NJ / New York, NY (hybrid: 3 days in office, 2 days work from home)
Nature of Contract: Contract opportunity
We are seeking a hands-on-keyboard Data Engineer to build core data assets for a new third-party billing, commission, and claims application. You will design and execute data pipelines to move data into a Databricks lakehouse environment, creating reusable, reporting-ready data structures to support analytics and reporting.
Core Mission:
Move and transform data from source systems into the client's Databricks-based Data Lake using the Medallion Architecture (Bronze → Silver layers). Build standardized, domain-aligned tables, not one-off datasets, to serve 40-50 planned reports and future analytics.
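To ground the Bronze → Silver flow described above, here is a minimal, illustrative PySpark sketch of the kind of standardization step this role performs in Databricks. The table and column names (bronze.policy_billing, silver.policy_billing, bill_amount, and so on) are hypothetical placeholders, not names from the client's environment, and a real pipeline would follow the client's existing frameworks and patterns.

# Illustrative Bronze -> Silver step in PySpark on Databricks.
# All table and column names below are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Read raw, as-landed records from the Bronze layer.
bronze_df = spark.read.table("bronze.policy_billing")

# Standardize types, trim identifiers, and deduplicate to produce a
# reusable, reporting-ready Silver table aligned to the billing domain.
silver_df = (
    bronze_df
    .withColumn("bill_amount", F.col("bill_amount").cast("decimal(18,2)"))
    .withColumn("policy_id", F.trim(F.col("policy_id")))
    .withColumn("bill_date", F.to_date("bill_date"))
    .dropDuplicates(["policy_id", "bill_date"])
)

# Write a managed table so the 40-50 planned reports reuse one shared asset
# rather than one-off datasets.
silver_df.write.mode("overwrite").saveAsTable("silver.policy_billing")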
Key Responsibilities:
• Write complex, optimized SQL to interrogate Oracle databases and explore raw data in Databricks.
• Develop and deploy production Python/PySpark scripts to build and extend data transformation pipelines (an illustrative sketch follows this list).
• Analyze data attributes for planned reports; identify gaps and extend schemas accordingly.
• Build reusable, generalized data assets following architectural standards for a warehouse-like layer.
• Work within AWS cloud infrastructure (Databricks on AWS) and follow existing frameworks and patterns.
• Collaborate with data architects and participate in peer code reviews.
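As a rough illustration of the first two responsibilities, the sketch below pulls an incremental extract from Oracle into the Bronze layer using Spark's JDBC reader. The connection URL, schema, query, and credentials are placeholders only; a real job would use the client's secret management and ingestion framework rather than hard-coded values.

# Illustrative Oracle -> Bronze ingestion with Spark's JDBC reader.
# URL, schema, table, and credentials below are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Push an incremental query down to Oracle (last day of changes).
source_query = """
    (SELECT claim_id, policy_id, claim_amount, status, updated_at
       FROM claims_owner.claims
      WHERE updated_at >= TRUNC(SYSDATE) - 1) src
"""

bronze_claims = (
    spark.read.format("jdbc")
    .option("url", "jdbc:oracle:thin:@//oracle-host:1521/ORCLPDB1")
    .option("dbtable", source_query)
    .option("user", "etl_user")
    .option("password", "***")  # placeholder; use a Databricks secret scope
    .option("driver", "oracle.jdbc.OracleDriver")
    .load()
)

# Land the raw extract unmodified in the Bronze layer.
bronze_claims.write.mode("append").saveAsTable("bronze.claims")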
Technical Requirements (Must-Haves):
• Expert-level SQL proficiency: this is the most critical skill.
• Hands-on development experience with Python and PySpark for building data pipelines.
• Practical experience with Databricks and the Medallion Architecture (Bronze/Silver/Gold layers).
• AWS cloud platform experience.
• Strong data analysis skills: ability to explore data and perform attribute-level analysis.
• Experience with object-oriented data models and transactional databases (e.g., Oracle).
Looking forward to hearing from you!
Best Regards,
Bharat Sharma
Sr. Talent Acquisition - Executive
Iris Software
200 Metroplex Drive, Suite #300
Edison, NJ 08817
Mail: bharat.sharma@irissoftware.com | www.irissoftware.com






