Empiric

Palantir Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Palantir Engineer in Washington, DC, on a 6-month contract at $100-$120 per hour. It requires a TS/SCI clearance, expertise in Palantir Foundry and Ontology, strong Python and PySpark skills, and public sector experience.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
960
πŸ—“οΈ - Date
December 17, 2025
🕒 - Duration
More than 6 months
🏝️ - Location
On-site
📄 - Contract
Unknown
🔒 - Security
Yes
πŸ“ - Location detailed
Washington DC-Baltimore Area
🧠 - Skills detailed
#Spark (Apache Spark) #Data Architecture #Programming #Cloud #Data Science #Project Management #Visualization #Data Modeling #Data Processing #Palantir Foundry #Data Integration #Python #Datasets #Data Governance #PySpark #Computer Science
Role description
Palantir Engineer - TS/SCI Clearance
Location: Washington, DC - 3-4 days onsite (will consider relocators)
Rate: $100 - $120 per hour
Contract: 6 months initial

We are seeking a talented Palantir Engineer with extensive skills in Foundry and Ontology, complemented by a strong data background. The ideal candidate will have experience working in the public sector and will be instrumental in developing and implementing data-driven solutions to solve complex problems.

Key Responsibilities
• Design, build, and maintain data models and workflows within Palantir Foundry.
• Collaborate with stakeholders to gather requirements and translate them into technical specifications.
• Utilize Palantir Ontology to ensure data integration and governance across various datasets.
• Develop and optimize data processing pipelines using Python and PySpark to handle large datasets efficiently (see the sketch after this section).
• Create visualizations and dashboards to communicate insights and analysis effectively.
• Conduct training sessions and workshops for end users to enhance their understanding and usage of Palantir tools.
• Ensure adherence to best practices and improve data architecture for optimal performance.

Qualifications
• Bachelor's degree in Computer Science, Data Science, Engineering, or a related field.
• Proven experience with Palantir Foundry and Ontology.
• Strong programming skills in Python and experience with PySpark.
• Solid understanding of data analytics, data modeling, and data governance.
• Experience working in the public sector or with governmental organizations is highly desirable.
• Excellent problem-solving skills and the ability to think critically about complex data challenges.
• Strong communication and collaboration skills to work effectively with cross-functional teams.

Preferred Qualifications
• Familiarity with cloud technologies and data warehousing solutions.
• Previous experience in project management or leading data initiatives.
• Knowledge of additional programming languages and data visualization tools.
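For candidates gauging the PySpark expectations, the sketch below shows the general shape of the pipeline work the responsibilities describe: read raw records, apply basic cleaning, aggregate, and write a partitioned output. It is a minimal, generic example; the file paths, column names (agency, amount, event_date), and aggregation are hypothetical placeholders and are not taken from this posting or from any Palantir Foundry API.

```python
# Minimal PySpark pipeline sketch (illustrative only; names and paths are placeholders).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

def build_daily_summary(input_path: str, output_path: str) -> None:
    spark = SparkSession.builder.appName("daily-summary").getOrCreate()

    # Read raw records; schema inference keeps the example short.
    raw = spark.read.csv(input_path, header=True, inferSchema=True)

    # Basic governance-style cleaning: drop rows missing key fields.
    cleaned = raw.dropna(subset=["agency", "amount", "event_date"])

    # Aggregate to a per-agency, per-day summary for downstream dashboards.
    summary = (
        cleaned
        .withColumn("event_date", F.to_date("event_date"))
        .groupBy("agency", "event_date")
        .agg(
            F.count("*").alias("record_count"),
            F.sum("amount").alias("total_amount"),
        )
    )

    # Write partitioned Parquet so later jobs can prune by date.
    summary.write.mode("overwrite").partitionBy("event_date").parquet(output_path)

if __name__ == "__main__":
    build_daily_summary("data/raw_events.csv", "data/daily_summary")
```

In a Foundry setting the same logic would typically live inside a managed transform rather than a standalone script, but the core PySpark skills exercised are the same.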