

Galent
Lead Data Engineer (Palantir, PySpark)
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Lead Data Engineer (Palantir, PySpark) in Jersey City, NJ, on a hybrid contract requiring 6–8 years of experience. Key skills include Palantir Foundry, PySpark, SQL, and knowledge of reinsurance data environments.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
February 26, 2026
-
🕒 - Duration
Unknown
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Jersey City, NJ
-
🧠 - Skills detailed
#Data Engineering #Data Ingestion #Automation #Cloud #PySpark #S3 (Amazon Simple Storage Service) #Python #Data Pipeline #Compliance #GIT #Metadata #Spark (Apache Spark) #Data Security #Business Analysis #GCP (Google Cloud Platform) #Snowflake #Data Governance #Data Lineage #REST (Representational State Transfer) #REST services #SQL (Structured Query Language) #Data Architecture #Spark SQL #Databricks #Palantir Foundry #Version Control #Azure #AWS (Amazon Web Services) #Scala #Data Integration #Agile #Security #ETL (Extract, Transform, Load) #Data Modeling #Data Management #Datasets
Role description
We have an immediate opening for a Lead Data Engineer (Palantir, PySpark) with a leading IT service/solutions provider in Jersey City, NJ.
Job Title – Lead Data Engineer (Palantir, PySpark)
Location: Jersey City, NJ
Work mode: Hybrid (4 days a week onsite)
Key Skills: Palantir, PySpark
Lead Data Engineer – Palantir & PySpark
Job Summary:
We are seeking a highly skilled Data Engineer with hands-on experience in Palantir (Foundry preferred), PySpark, and exposure to reinsurance or insurance data environments. The ideal candidate will play a key role in building scalable data pipelines, optimizing ETL workflows, and enabling advanced analytics and reporting capabilities. This role requires a strong technical foundation in data engineering combined with an understanding of the reinsurance business domain.
Key Responsibilities:
• Design, develop, and maintain data pipelines and ETL workflows using PySpark, SQL, and Palantir Foundry.
• Collaborate with data architects, business analysts, and actuarial teams to understand reinsurance data models and transform complex datasets into usable formats.
• Build and optimize data ingestion, transformation, and validation processes to support analytical and reporting use cases.
• Work within the Palantir Foundry platform to design robust workflows, manage datasets, and ensure efficient data lineage and governance.
• Ensure data security, compliance, and governance in line with industry and client standards.
• Identify opportunities for automation and process improvement across data systems and integrations.
Required Skills & Qualifications:
• 6–8 years of overall experience in data engineering roles.
• Strong hands-on expertise in PySpark (DataFrames, RDDs, performance optimization).
• Proven experience working with Palantir Foundry or similar data integration platforms.
• Good understanding of reinsurance including exposure, claims, and policy data structures.
• Proficiency in SQL, Python, and working with large datasets in distributed environments.
• Experience with cloud platforms (AWS, Azure, or GCP) and related data services (e.g., S3, Snowflake, Databricks).
• Knowledge of data modeling, metadata management, and data governance frameworks.
• Familiarity with CI/CD pipelines, version control (Git), and Agile delivery methodologies.
Preferred Skills:
• Experience with data warehousing and reporting modernization projects in the reinsurance domain.
• Exposure to Palantir ontology design and data operationalization.
• Working knowledge of APIs, REST services, and event-driven architecture.
• Understanding of actuarial data flows, submission processes, and underwriting analytics is a plus.