Iris Software Inc.

Senior Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer on a long-term contract, located in Bethlehem, PA or Holmdel, NJ. Key skills include hands-on SQL, Python, PySpark, AWS, and Databricks. Proven data engineering experience is required.
🌎 - Country
United States
πŸ’± - Currency
$ USD
-
πŸ’° - Day rate
Unknown
-
πŸ—“οΈ - Date
December 16, 2025
πŸ•’ - Duration
Unknown
-
🏝️ - Location
On-site
-
πŸ“„ - Contract
Unknown
-
πŸ”’ - Security
Unknown
-
πŸ“ - Location detailed
New York City Metropolitan Area
-
🧠 - Skills detailed
#Data Quality #Data Engineering #AWS (Amazon Web Services) #Data Pipeline #Debugging #Scala #SQL (Structured Query Language) #Cloud #Datasets #Databricks #PySpark #Data Extraction #Python #Spark (Apache Spark) #ETL (Extract, Transform, Load) #Automation
Role description
Greetings! We are looking for a hands-on Data Engineer who can work directly with complex data models and build reliable data pipelines that support reporting and analytics use cases. This role requires someone who is comfortable writing production-grade code, working with object-oriented data structures, and analyzing data through advanced SQL.

Location: Bethlehem, PA or Holmdel, NJ
Duration: Long-term contract
Client: One of the largest insurance companies in North America
Must-have skills: SQL (hands-on), Python (hands-on), PySpark, AWS, and Databricks

What You'll Do
• Work with object-oriented data models and understand how they translate into physical structures.
• Write strong, optimized SQL to query, profile, and validate data across multiple sources.
• Build Python, PySpark, and SQL scripts to extract, transform, and load data into curated data assets (an illustrative sketch follows the lists below).
• Support reporting, analytics, and cross-functional data use cases by delivering clean, well-structured datasets.
• Analyze data quality issues, troubleshoot pipeline failures, and recommend improvements.
• Collaborate with engineering, analytics, and product teams to understand data requirements and deliver scalable solutions.

What We're Looking For
• Proven experience as a Data Engineer with hands-on development (not design-only or oversight-only).
• Strong SQL skills: the ability to write complex joins, window functions, aggregations, and performance-tuned queries.
• Proficiency in Python and PySpark for data extraction, transformation, and automation.
• Ability to read and work with object-oriented data models and understand entity relationships.
• Experience building data pipelines in distributed or cloud environments is a plus.
• Solid understanding of data validation, data quality checks, and debugging techniques.
• Ability to work independently, communicate clearly, and collaborate with technical and non-technical teams.
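For context, the pipeline work described above might look roughly like the minimal PySpark sketch below: deduplicate raw records with a window function, run a basic data quality check, and write a curated table. All table and column names (raw_policies, curated.policies, policy_id, updated_at) are hypothetical placeholders, and the Delta output format is assumed only because the stack includes Databricks; none of these specifics come from the posting.

```python
# Minimal ETL sketch under the assumptions stated above; not the client's actual pipeline.
from pyspark.sql import SparkSession, Window, functions as F

spark = SparkSession.builder.appName("curated-policies-etl").getOrCreate()

# Extract: read a raw table registered in the metastore (hypothetical name).
raw = spark.table("raw_policies")

# Transform: keep only the latest record per policy_id using a window function.
latest_first = Window.partitionBy("policy_id").orderBy(F.col("updated_at").desc())
deduped = (
    raw.withColumn("rn", F.row_number().over(latest_first))
       .filter(F.col("rn") == 1)
       .drop("rn")
)

# Data quality check: fail fast if required keys are missing.
null_keys = deduped.filter(F.col("policy_id").isNull()).count()
if null_keys > 0:
    raise ValueError(f"{null_keys} rows have a null policy_id; aborting load")

# Load: write the curated dataset (Delta format assumed for a Databricks environment).
deduped.write.format("delta").mode("overwrite").saveAsTable("curated.policies")
```

The same deduplication could equally be expressed in SQL with ROW_NUMBER() over a partition, which is the kind of window-function fluency the requirements list calls out.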