NITYA Software Solutions Inc

Palantir Foundry Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Palantir Foundry Engineer in Nashville, TN, requiring 12+ years in data engineering, 4+ years hands-on with Palantir Foundry, and expertise in SQL and PySpark/Scala. Contract length and pay rate are unspecified.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
πŸ—“οΈ - Date
January 17, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
πŸ“ - Location detailed
Nashville, TN
-
🧠 - Skills detailed
#Observability #SQL (Structured Query Language) #Cloud #Datadog #AI (Artificial Intelligence) #AWS (Amazon Web Services) #Spark (Apache Spark) #Kafka (Apache Kafka) #Datasets #Data Quality #Scala #Azure #Security #Data Governance #PySpark #Palantir Foundry #GCP (Google Cloud Platform) #GIT #Data Engineering #Data Ingestion
Role description
🚀 Hiring: Palantir Foundry Engineer (Healthcare)
📍 Location: Nashville, TN (Onsite – Day 1)
🏥 Domain: Healthcare / Life Sciences
Email: harshavardhan@nityainc.com

We're looking for a senior Palantir Foundry Engineer with strong hands-on experience building ontology-driven data products and production-grade pipelines in healthcare environments. This role owns the end-to-end lifecycle, from raw data ingestion to governed, reusable datasets powering operational apps and AI workflows.

Key Responsibilities
• Design the Foundry Ontology (object types, relationships, schema evolution, lineage)
• Build and optimize Foundry pipelines and materializations using SQL and PySpark/Scala
• Implement incremental, snapshot, backfill, and recovery patterns
• Operationalize pipelines with SLAs/SLOs, alerts, and data quality checks
• Enforce data governance, RBAC, PII masking, and least-privilege access
• Optimize performance (joins, partitions, Delta/Parquet, cost controls)
• Enable SDLC best practices: Git, CI/CD, testing, reusable templates
• Expose ontology-backed data to Workshop / Slate apps, AIP, and Actions
• Own reliability, incident response, and production runbooks
• Mentor teams and document Foundry best practices

Required Skills & Experience
• 12+ years in data engineering, with 4+ years hands-on in Palantir Foundry
• Deep expertise in Foundry Ontology, Code Workbooks, Pipelines, and Lineage
• Strong SQL plus PySpark/Scala within Foundry
• Experience with SLAs, alerting, and data quality frameworks
• Git-based SDLC, CI/CD, and environment promotion
• Cloud platforms (AWS / Azure / GCP) and Delta / Parquet
• Strong understanding of data governance and security

⭐ Nice to Have
• Workshop / Slate app development
• AIP, Actions, or AI-driven Foundry use cases
• Streaming ingestion (Kafka / Kinesis)
• Observability and cost governance (Datadog, CloudWatch, FinOps)
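For candidates gauging the hands-on bar, the pipeline work described above typically resembles the minimal PySpark sketch below. It uses plain Spark rather than Foundry's transform decorators, and the dataset paths, column names, and salt value are hypothetical, not anything from this posting: the idea is simply to show PII masking, an incremental-style dedup keeping the latest record per key, and a basic data quality gate before the write.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("claims_masking_sketch").getOrCreate()

# Hypothetical raw claims feed; all column names are illustrative only.
raw = spark.read.parquet("/data/raw/claims")

# PII masking: replace direct identifiers with salted SHA-256 hashes so
# downstream datasets carry only pseudonymous keys.
SALT = "example-salt"  # in practice this would come from a secret store
masked = (
    raw
    .withColumn(
        "member_key",
        F.sha2(F.concat_ws("|", F.lit(SALT), F.col("member_ssn")), 256),
    )
    .drop("member_ssn", "member_name", "member_dob")
)

# Incremental-style dedup: keep only the latest version of each claim,
# mimicking a merge/backfill pattern over an append-only feed.
w = Window.partitionBy("claim_id").orderBy(F.col("updated_at").desc())
latest = (
    masked
    .withColumn("rn", F.row_number().over(w))
    .filter(F.col("rn") == 1)
    .drop("rn")
)

# Simple data quality gate: fail the run if any record lost its key.
null_keys = latest.filter(F.col("claim_id").isNull()).count()
if null_keys > 0:
    raise ValueError(f"data quality check failed: {null_keys} rows with null claim_id")

latest.write.mode("overwrite").parquet("/data/clean/claims_latest")
```

Inside Foundry, the same logic would usually live in a Code Repository or Code Workbook transform with the inputs, outputs, and checks managed by the platform; this standalone version is only meant to illustrate the patterns named in the responsibilities.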