Oncorre Inc

Lead Data Engineer – Palantir & PySpark (Remote) – Experience in Reinsurance Domain

⭐ - Featured Role | Apply directly with Data Freelance Hub
This remote role is for a Lead Data Engineer with 6–10 years of experience, focused on Palantir and PySpark in the reinsurance domain and requiring expertise in SQL, cloud platforms, and data governance.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date
November 5, 2025
🕒 - Duration
Unknown
🏝️ - Location
Remote
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
New Jersey, United States
🧠 - Skills detailed
#Data Governance #Data Integration #Data Ingestion #Automation #Databricks #Scala #Snowflake #Spark SQL #SQL (Structured Query Language) #Agile #Datasets #Data Architecture #REST (Representational State Transfer) #GCP (Google Cloud Platform) #Data Management #ETL (Extract, Transform, Load) #Data Lineage #PySpark #Python #Business Analysis #Version Control #GIT #Spark (Apache Spark) #Data Pipeline #Security #Data Security #Palantir Foundry #S3 (Amazon Simple Storage Service) #AWS (Amazon Web Services) #REST services #Compliance #Metadata #Azure #Data Engineering #Cloud #Data Modeling
Role description
Lead Data Engineer – Palantir & PySpark
Experience: 6–10 Years
Location: Remote
Client Industry: Reinsurance

Job Summary:
We are seeking a highly skilled Data Engineer with hands-on experience in Palantir (Foundry preferred) and PySpark, and exposure to reinsurance or insurance data environments. The ideal candidate will play a key role in building scalable data pipelines, optimizing ETL workflows, and enabling advanced analytics and reporting capabilities. The role requires a strong technical foundation in data engineering combined with an understanding of the reinsurance business domain.

Key Responsibilities:
· Design, develop, and maintain data pipelines and ETL workflows using PySpark, SQL, and Palantir Foundry (a brief sketch of this kind of work follows the description below).
· Collaborate with data architects, business analysts, and actuarial teams to understand reinsurance data models and transform complex datasets into usable formats.
· Build and optimize data ingestion, transformation, and validation processes to support analytical and reporting use cases.
· Work within the Palantir Foundry platform to design robust workflows, manage datasets, and maintain clean data lineage and governance.
· Ensure data security, compliance, and governance in line with industry and client standards.
· Identify opportunities for automation and process improvement across data systems and integrations.

Required Skills & Qualifications:
· 6–10 years of overall experience in data engineering roles.
· Strong hands-on expertise in PySpark (DataFrames, RDDs, performance optimization).
· Proven experience with Palantir Foundry or similar data integration platforms.
· Good understanding of reinsurance data, including exposure, claims, and policy data structures.
· Proficiency in SQL and Python, and experience working with large datasets in distributed environments.
· Experience with cloud platforms (AWS, Azure, or GCP) and related data services (e.g., S3, Snowflake, Databricks).
· Knowledge of data modeling, metadata management, and data governance frameworks.
· Familiarity with CI/CD pipelines, version control (Git), and Agile delivery methodologies.

Preferred Skills:
· Experience with data warehousing and reporting modernization projects in the reinsurance domain.
· Exposure to Palantir ontology design and data operationalization.
· Working knowledge of APIs, REST services, and event-driven architecture.
· Understanding of actuarial data flows, submission processes, and underwriting analytics is a plus.
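For candidates unfamiliar with Foundry's developer workflow, here is a minimal, hypothetical sketch of the kind of PySpark pipeline work this role describes: a Foundry Python transform that standardizes raw reinsurance claims data. The dataset paths, column names, and cleanup rules are illustrative assumptions, not details from this posting; the @transform_df decorator pattern is Foundry's standard Python transforms API.

```python
# Minimal sketch of a Palantir Foundry Python transform using PySpark.
# All dataset paths and column names are hypothetical examples.
from transforms.api import transform_df, Input, Output
from pyspark.sql import functions as F
from pyspark.sql.window import Window


@transform_df(
    Output("/Example/reinsurance/clean/claims"),          # hypothetical output dataset
    raw_claims=Input("/Example/reinsurance/raw/claims"),  # hypothetical input dataset
)
def clean_claims(raw_claims):
    """Standardize and validate raw claims records for downstream analytics."""
    latest_first = Window.partitionBy("claim_id").orderBy(F.col("reported_at").desc())
    return (
        raw_claims
        # Parse loss dates into a proper date type for filtering and joins.
        .withColumn("loss_date", F.to_date(F.col("loss_date"), "yyyy-MM-dd"))
        # Drop records missing the keys actuarial and reporting teams rely on.
        .filter(F.col("policy_id").isNotNull() & F.col("claim_amount").isNotNull())
        # Normalize currency codes before joining against exposure and policy data.
        .withColumn("currency", F.upper(F.trim(F.col("currency"))))
        # Keep only the most recently reported version of each claim.
        .withColumn("rn", F.row_number().over(latest_first))
        .filter(F.col("rn") == 1)
        .drop("rn")
    )
```

Only the @transform_df wrapper and dataset paths are Foundry-specific (and are what give Foundry its automatic lineage tracking); the DataFrame logic itself would run unchanged on any Spark environment such as Databricks.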