Lynn Rodens

Palantir Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Palantir Data Engineer (Remote/Hybrid); the contract length and pay rate are unspecified. It requires 2–3 years of data engineering experience; proficiency in Python, PySpark, and SQL; and Palantir Foundry certification.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date
February 3, 2026
🕒 - Duration
Unknown
🏝️ - Location
Hybrid
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
Mount Holly, NJ
🧠 - Skills detailed
#Security #Data Ingestion #Docker #Agile #AWS S3 (Amazon Simple Storage Service) #Palantir Foundry #ETL (Extract, Transform, Load) #Spark (Apache Spark) #Spark SQL #Data Pipeline #Data Processing #Scala #Data Engineering #Git #Data Modeling #Datasets #Jira #SQL (Structured Query Language) #Kubernetes #Python #PySpark #Big Data #AWS (Amazon Web Services) #Kafka (Apache Kafka) #AI (Artificial Intelligence)
Role description
Palantir Data Engineer (Remote/Hybrid)

We're seeking a skilled Palantir Data Engineer to design, build, and scale high-impact data solutions within the Palantir Foundry platform. In this role, you'll turn complex, large-scale data into reliable, production-ready analytics that power AI-enabled decision making.

What You'll Do
• Build, maintain, and optimize scalable data pipelines and workflows in Palantir Foundry using Python, PySpark, and SQL (see the brief sketch after this description)
• Develop data models and ontologies, mapping datasets to objects with strong governance, lineage, and security
• Monitor and tune performance for large-scale data processing environments
• Partner with business stakeholders and engineering teams to translate requirements into technical solutions
• Support platform reliability by managing data ingestion, integration, and overall Foundry uptime

What You Bring
• 2–3 years of experience in data engineering, ETL workflows, and big data technologies (e.g., Spark, Kafka, AWS)
• Strong proficiency in Python, PySpark, SQL, and data modeling best practices
• Hands-on Palantir experience, including Foundry certification (Code Repositories, Workbooks, Pipeline Builder)
• Experience working in Agile environments with large, complex datasets
• Familiarity with collaboration and versioning tools such as Jira, Git, and Confluence

Tech Stack
• Platforms & Tools: Palantir Foundry, Docker/Kubernetes, AWS (S3, Glue)

Why This Role
You'll work on meaningful, high-visibility data initiatives, collaborate with cross-functional teams, and build solutions that move from concept to production. If you enjoy solving complex data challenges and delivering analytics that drive real-world impact, this role offers both technical depth and growth opportunity.