InfoVision Inc.

Palantir Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Palantir Data Engineer in Dallas, TX, on a hybrid contract. It requires 5+ years of experience, proficiency in Palantir Foundry, and strong Python skills. A Bachelor's degree in a related field is essential; a Master's is preferred.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
December 3, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Dallas, TX
-
🧠 - Skills detailed
#Data Engineering #Big Data #Linux #Programming #Data Cleaning #Java #Databricks #Deployment #Palantir Foundry #Computer Science #Data Pipeline #Streamlit #Data Science #Python #ML (Machine Learning) #Spark (Apache Spark) #MLflow #Agile #Scala #Forecasting #Classification #Model Deployment #Libraries #ML Ops (Machine Learning Operations) #AI (Artificial Intelligence) #Unix #Monitoring #Cloud
Role description
Palantir Data Engineer – Dallas, TX (Hybrid)

Job Description
We are seeking a skilled Palantir Data Engineer to join our data and AI team in Dallas, TX. In this role, you will design and deploy scalable data pipelines, ontologies, and machine learning models using Palantir Foundry and other modern data platforms. You will work closely with stakeholders to integrate data, optimize workflows, and enable AI-powered decisions across critical functions such as supply chain, defense, and operations.

Responsibilities
• Build and maintain data pipelines and workflows in Palantir Foundry, integrating structured and unstructured data.
• Develop and manage ontology models and object logic within Foundry to align with business use cases.
• Design, train, and deploy ML models for classification, optimization, and forecasting use cases.
• Apply feature engineering, data cleaning, and modeling techniques using Python, Spark, and ML libraries (see the illustrative Spark sketch after this description).
• Support model deployment, monitoring, and ML Ops using tools like MLflow and Databricks (see the MLflow sketch after this description).
• Create dashboards and data applications using Slate or Streamlit to enable operational decision-making.
• Implement generative AI use cases using large language models (GPT-4, Claude, Cohere) and Retrieval-Augmented Generation (RAG).
• Collaborate with data scientists, engineers, and product owners to deliver end-to-end data solutions.
• Work directly with clients to define data requirements and guide platform enablement.
• Support agile project delivery and provide ongoing platform optimization post-deployment.

Qualifications
• 5+ years of experience in data engineering, machine learning, or AI deployment.
• Proficiency with Palantir Foundry (pipelines, code workbooks, ontology modeling).
• Strong programming skills in Python, Scala, or Java; experience with Spark and cloud platforms.
• Hands-on experience with Unix/Linux environments and big data tools.
• Familiarity with ML Ops, model retraining, and monitoring best practices.
• Excellent communication and client engagement skills.
• Experience working in logistics, defense, energy, or public sector industries is a plus.

Education
Bachelor's degree in Computer Science, Engineering, Data Science, or a related field. A Master's degree is preferred.
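
To give a flavor of the feature-engineering and data-cleaning responsibility above, here is a minimal PySpark sketch. The dataset, column names, imputation values, and threshold are hypothetical placeholders, not part of the role description; in Palantir Foundry, equivalent logic would typically live inside a Code Workbook or a Code Repository transform rather than a standalone script.

```python
# Minimal data-cleaning / feature-engineering sketch with PySpark.
# All data and column names below are hypothetical examples.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("shipment_features").getOrCreate()

# Hypothetical raw shipments dataset with nulls and inconsistent casing.
raw = spark.createDataFrame(
    [("S-001", "DALLAS", 120.0, None), ("S-002", "austin", None, 3)],
    ["shipment_id", "origin_city", "weight_kg", "transit_days"],
)

features = (
    raw
    # Basic cleaning: normalize categorical text and drop rows missing the key.
    .withColumn("origin_city", F.lower(F.trim("origin_city")))
    .dropna(subset=["shipment_id"])
    # Simple imputation plus a derived feature for a downstream forecasting model.
    .fillna({"weight_kg": 0.0, "transit_days": 0})
    .withColumn("is_heavy", (F.col("weight_kg") > 100).cast("int"))
)

features.show()
```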
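The model-deployment and ML Ops responsibility references MLflow. The sketch below shows one common pattern, a tracked training run whose parameters, metrics, and model artifact are logged for later registration and monitoring; the experiment name, model choice, and synthetic data are illustrative assumptions only.

```python
# Minimal MLflow tracking sketch; everything here is a hypothetical example.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic classification data standing in for a real business dataset.
X, y = make_classification(n_samples=500, n_features=8, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("demand-classification")  # hypothetical experiment name

with mlflow.start_run():
    model = LogisticRegression(max_iter=500).fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))

    # Log parameters, metrics, and the fitted model so the run can be compared,
    # registered, and monitored later (e.g., from Databricks).
    mlflow.log_param("max_iter", 500)
    mlflow.log_metric("accuracy", acc)
    mlflow.sklearn.log_model(model, "model")
```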