

Merit Consulting Group Inc
Palantir Foundry Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Palantir Foundry Data Engineer; the contract length and pay rate are unspecified. It requires 7+ years of data engineering experience, including 4+ years in Palantir Foundry, along with SQL and PySpark/Scala expertise.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
December 12, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Tennessee, United States
-
🧠 - Skills detailed
#Kafka (Apache Kafka) #GIT #Scala #Documentation #Data Pipeline #Data Governance #Automated Testing #GCP (Google Cloud Platform) #Observability #Spark (Apache Spark) #Datasets #Data Quality #ETL (Extract, Transform, Load) #Palantir Foundry #SQL (Structured Query Language) #Cloud #PySpark #Security #Datadog #Azure #AWS (Amazon Web Services) #Monitoring #Data Engineering
Role description
Experience: 7+ years (must have 4+ years in Palantir Foundry)
Job Description
We are looking for an experienced Palantir Foundry Data Engineer who can design ontology-based data products, develop reliable data pipelines, and operationalize them for enterprise use. The ideal candidate will have strong hands-on experience in Foundry Ontology, Code Workbooks, Pipelines, and Materializations, along with expertise in SQL, PySpark/Scala, and data governance. This role involves building high-quality datasets, optimizing performance, ensuring data reliability, and supporting applications and AIP/Actions use cases.
Key Responsibilities
• Design and maintain ontology models, object types, relationships, and governed datasets.
• Build and optimize data pipelines using SQL, PySpark/Scala, and Foundry Code Workbooks.
• Develop incremental, snapshot, and multi-stage pipelines with backfill and recovery strategies (see the sketch after this list).
• Configure schedules, SLAs/SLOs, alerts, monitoring, and data quality checks.
• Implement RBAC, security controls, PII handling, and data governance policies.
• Improve performance through partitioning, caching, materialization, and file optimization.
• Use Git-based workflows, versioning, CI/CD, and automated testing for transforms.
• Expose ontology-backed datasets to Workshop/Slate apps and integrate with AIP agents.
• Handle incident response, root cause analysis, and prepare runbooks.
• Create documentation, playbooks, and reusable frameworks; mentor team members.
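To make the pipeline bullets concrete, here is a minimal sketch of an incremental Foundry transform with a basic quality gate, assuming the transforms.api module from Foundry Code Repositories; the dataset paths and the order_id column are hypothetical.

```python
# Minimal sketch of an incremental Foundry pipeline transform (assumes the
# transforms.api module from Foundry Code Repositories; paths and columns
# are hypothetical).
from transforms.api import incremental, transform_df, Input, Output


@incremental()  # on incremental runs, `raw` exposes only newly appended rows
@transform_df(
    Output("/Org/Project/datasets/orders_clean"),   # hypothetical output path
    raw=Input("/Org/Project/datasets/orders_raw"),  # hypothetical input path
)
def orders_clean(raw):
    # Quality gate: reject rows missing the business key and deduplicate the
    # incoming batch before it propagates downstream; in practice you would
    # also attach Foundry expectations (Checks) to the Output.
    return (
        raw.filter(raw["order_id"].isNotNull())
           .dropDuplicates(["order_id"])
    )
```

Note that in incremental mode the deduplication above applies only within each batch; deduplicating against history, like a backfill, would typically be handled with a full snapshot rebuild.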
Required Skills
• 7+ years in data engineering/analytics engineering.
• Minimum 4 years of hands-on experience with Palantir Foundry (Ontology, Pipelines, Code Workbooks, Permissions, Materializations).
• Strong SQL and PySpark/Scala coding ability.
• Experience with SLAs, monitoring, alerting, and data quality frameworks.
• Proficiency with Git, CI/CD pipelines, testing, and environment promotion.
• Experience with cloud platforms (AWS/Azure/GCP) and file formats such as Parquet/Delta (an example follows this list).
• Strong understanding of data governance, PII handling, and security models.
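As a concrete counterpart to the partitioning and file-format requirements, the following generic PySpark sketch (not Foundry-specific) writes date-partitioned Parquet; the paths and column names are hypothetical.

```python
# Generic PySpark sketch: date-partitioned Parquet output (paths and columns
# are hypothetical and not tied to any specific dataset).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("partitioned-write").getOrCreate()

events = spark.read.parquet("/data/raw/events")  # hypothetical input

(
    events
    .repartition("event_date")        # group rows so each partition writes few files
    .write
    .mode("overwrite")
    .partitionBy("event_date")        # physical layout: one directory per date
    .parquet("/data/curated/events")  # hypothetical output
)
```

Partition pruning on event_date then lets downstream queries skip irrelevant files, which is typically the largest single performance win for scan-heavy pipelines.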
Nice to Have
• Experience building Workshop/Slate interfaces or AIP/Actions integrations.
• Knowledge of Kafka/Kinesis for streaming ingestion.
• Familiarity with observability tools such as Datadog or CloudWatch.
• Experience creating reusable templates, standards, or framework components.





