Empiric

Palantir Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Palantir Data Engineer with 6-10 years of experience, focusing on PySpark and reinsurance data. The contract lasts 3-6 months, offers $70-$75 per hour, and is remote. Key skills include SQL, Python, and cloud platforms.
🌎 - Country
United States
πŸ’± - Currency
$ USD
πŸ’° - Day rate
600
πŸ—“οΈ - Date
November 11, 2025
πŸ•’ - Duration
3 to 6 months
🏝️ - Location
Remote
πŸ“„ - Contract
Unknown
πŸ”’ - Security
Unknown
πŸ“ - Location detailed
United States
🧠 - Skills detailed
#Security #Data Lineage #Spark (Apache Spark) #Databricks #Data Architecture #ETL (Extract, Transform, Load) #Metadata #SQL (Structured Query Language) #Automation #Data Engineering #Agile #Data Integration #Business Analysis #AWS (Amazon Web Services) #Datasets #Data Governance #Spark SQL #Cloud #Data Ingestion #PySpark #Python #GCP (Google Cloud Platform) #GIT #REST services #Data Management #Snowflake #S3 (Amazon Simple Storage Service) #Palantir Foundry #Azure #Data Pipeline #REST (Representational State Transfer) #Data Security #Compliance #Data Modeling #Version Control
Role description
Palantir Data Engineer - PySpark - Insurance Experience

Contract: 3-6 months initial
Rate: $70-$75 per hour
Location: Remote

Job Description:
• Develop, design, and sustain data pipelines and ETL workflows using PySpark, SQL, and Palantir Foundry.
• Partner with data architects, business analysts, and actuarial teams to grasp reinsurance data models and convert intricate datasets into accessible formats.
• Create and enhance processes for data ingestion, transformation, and validation to support analytical and reporting objectives (a minimal PySpark sketch of this pattern follows the lists below).
• Utilize the Palantir Foundry platform to craft robust workflows, manage datasets, and guarantee efficient data lineage and governance.
• Uphold data security, compliance, and governance in accordance with industry standards and client requirements.
• Spot opportunities for automation and process enhancements across data systems and integrations.

Required Skills & Qualifications:
• 6 to 10 years of experience in data engineering roles.
• Strong practical knowledge of PySpark (including DataFrames, RDDs, and performance optimization).
• Demonstrated experience with Palantir Foundry or comparable data integration platforms.
• Solid understanding of reinsurance, including exposure, claims, and policy data structures.
• Proficiency in SQL and Python, along with experience handling large datasets in distributed environments.
• Familiarity with cloud platforms (AWS, Azure, or GCP) and their associated data services (e.g., S3, Snowflake, Databricks).
• Knowledge of data modeling, metadata management, and data governance frameworks.
• Experience with CI/CD pipelines, version control systems (Git), and Agile delivery methodologies.

Preferred Skills:
• Experience in data warehousing and reporting modernization projects within the reinsurance sector.
• Familiarity with Palantir ontology design and data operationalization.
• Working knowledge of APIs, REST services, and event-driven architecture.
• Understanding of actuarial data flows, submission processes, and underwriting analytics is a plus.
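To give a concrete flavor of the ingestion-transformation-validation work described above, here is a minimal PySpark sketch. It is illustrative only: the storage paths and column names (policy_id, gross_premium, cession_rate) are hypothetical placeholders, not taken from the role description, and Foundry-specific APIs are omitted in favor of plain PySpark so the sketch runs on any Spark installation.

```python
from pyspark.sql import SparkSession, functions as F

# Illustrative only: paths and column names below are hypothetical,
# not taken from the job posting.
spark = SparkSession.builder.appName("reinsurance-etl-sketch").getOrCreate()

# Ingest: read raw policy data from a hypothetical landing zone.
raw = (
    spark.read
    .option("header", True)
    .option("inferSchema", True)
    .csv("s3://example-bucket/landing/policies/")  # placeholder path
)

# Transform: derive ceded premium from gross premium and cession rate.
transformed = (
    raw
    .withColumn("gross_premium", F.col("gross_premium").cast("double"))
    .withColumn("cession_rate", F.col("cession_rate").cast("double"))
    .withColumn("ceded_premium", F.col("gross_premium") * F.col("cession_rate"))
)

# Validate: keep rows with a policy key and an in-range cession rate;
# route everything else to a quarantine output for review.
valid = transformed.filter(
    F.col("policy_id").isNotNull()
    & F.col("cession_rate").between(0.0, 1.0)
)
quarantined = transformed.subtract(valid)

# Load: write curated and quarantined outputs (placeholder destinations).
valid.write.mode("overwrite").parquet("s3://example-bucket/curated/policies/")
quarantined.write.mode("overwrite").parquet("s3://example-bucket/quarantine/policies/")
```

In a Foundry pipeline the read and write boundaries would typically be handled by the platform's managed dataset transforms rather than raw storage paths, while the transformation and validation logic in the middle would look much the same.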