WorkTrust Solutions

Palantir Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a remote Palantir Data Engineer on a 6-month contract (with an extension or permanent-hire option); the pay rate is not disclosed. Required skills include SQL, Python, and experience with big data frameworks. U.S. citizenship is mandatory; preferred experience includes Palantir Foundry and cloud platforms.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date
April 11, 2026
🕒 - Duration
More than 6 months
🏝️ - Location
Remote
📄 - Contract
Fixed Term
🔒 - Security
Unknown
📍 - Location detailed
United States
🧠 - Skills detailed
#Deployment #Visualization #Palantir Foundry #Data Engineering #SQL Queries #Programming #Apache Spark #ETL (Extract, Transform, Load) #Python #PySpark #Databricks #Data Modeling #AWS (Amazon Web Services) #Snowflake #SAP #Datasets #ML (Machine Learning) #Data Pipeline #Data Lake #Data Processing #Azure #Cloud #Data Science #GCP (Google Cloud Platform) #SQL (Structured Query Language) #Big Data #Leadership
Role description
We are seeking a talented Data Engineer to join our team and work on an existing Palantir deployment.

Job Title: Palantir Data Engineer (Remote)

Duration:
• 6 months for contract positions, with an extension or permanent-hire option

Eligibility:
• U.S. citizens only

Responsibilities:
• Data Engineering: Design, develop, and maintain data pipelines within the Palantir platform.
• Data Transformation and Analysis: Write efficient SQL queries and leverage Python and PySpark to transform and analyze large datasets (see the illustrative sketch after this description).
• Data Visualization: Work with existing visualizations within the Palantir platform, potentially enhancing them or creating new ones.
• Code Management: Contribute to the code repository, ensuring code quality and maintainability.
• Project Support: Assist with the ongoing maintenance and optimization of existing Palantir use cases.
• Future Projects: Contribute to the development of additional projects, including potential work in Supply Chain Management (SCM) and Estimate at Completion (EAC) analysis.
• Ontology Optimization: Refactor and optimize existing ontologies.
• Data Pipeline Management: Manage and optimize data pipelines, including external data sources.
• Scheduled Job Optimization: Analyze and resolve scheduling conflicts within existing scheduled jobs.
• Advisory Role: Provide seasoned advisory and leadership in Palantir, addressing the client's limited in-house Palantir knowledge and ensuring best practices.

Required Skills:
• Core Skills: Strong proficiency in SQL and Python.
• Hands-on experience with data warehouses and data lakes.
• Key Skills: Experience with big data processing frameworks such as Apache Spark and PySpark.
• Familiarity with data visualization tools and techniques.
• Understanding of data modeling and data warehousing concepts.
• Experience with Snowflake and Databricks.
• Code repository experience.

Preferred Skills:
• Experience with Palantir Foundry and related Palantir products (Gotham, Foundry Virtual).
• Knowledge of cloud platforms (AWS, Azure, GCP).
• Experience with machine learning and data science.
• Former Palantir employment is a plus.
• SAP experience.

Team Environment:
• Collaborative environment involving multiple parties.
• Total team size: 40-60 people.
• Candidates must be able to work both independently and collaboratively.
• Candidates must be politically aware.
• Consultants will act as independent advisors to ensure the client's best interests.
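For illustration only, the kind of PySpark transformation work this role describes might resemble the minimal sketch below. The dataset path, column names, and the EAC-style cost rollup are hypothetical examples, not details of the client's actual pipelines.

```python
# Minimal PySpark transform sketch. All paths and column names are
# hypothetical; they are not taken from the client's deployment.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("scm-eac-sketch").getOrCreate()

# Load a hypothetical order-cost dataset from a data lake.
orders = spark.read.parquet("s3://example-bucket/scm/orders/")

# Aggregate actual cost by project to feed an EAC-style rollup.
eac = (
    orders
    .filter(F.col("status") == "complete")
    .groupBy("project_id")
    .agg(
        F.sum("actual_cost").alias("actual_cost_to_date"),
        F.avg("unit_cost").alias("avg_unit_cost"),
    )
)

# Write the rollup back to the lake for downstream visualization.
eac.write.mode("overwrite").parquet("s3://example-bucket/scm/eac_rollup/")
```

In a Palantir Foundry deployment, an equivalent transform would typically be authored against Foundry datasets in Code Repositories rather than reading directly from object storage, but the PySpark logic itself carries over.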