Softcom Systems Inc

Palantir Foundry Data Engineer :: W2 Contract :: Remote

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Palantir Foundry Data Engineer on a W2 contract, remote, requiring over 10 years of experience. Key skills include Palantir Foundry, Databricks, PySpark, and Python, with a focus on ETL/ELT workflows and data governance.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
November 25, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Denver, CO
-
🧠 - Skills detailed
#PySpark #Cloud #SQL (Structured Query Language) #Data Quality #Delta Lake #Data Science #Data Modeling #AWS (Amazon Web Services) #Datasets #Databricks #Security #Palantir Foundry #ETL (Extract, Transform, Load) #Data Engineering #Agile #Python #GIT #Compliance #Data Security #Scala #Data Pipeline #Spark (Apache Spark) #Azure #Deployment #Spark SQL
Role description
Detailed Job Description
We are looking for a versatile Data Engineer with strong experience in Palantir Foundry and modern data engineering tools such as Databricks, PySpark, and Python. This role involves designing and building scalable data pipelines, managing transformations, and enabling analytics and operational workflows across enterprise platforms. You will work closely with business stakeholders, data scientists, and product teams to deliver high-quality, governed, and reusable data assets that power decision-making and advanced analytics.

Minimum years of experience: 10+ years

Key Responsibilities
• Design, develop, and optimize data pipelines and transformations using Palantir Foundry (Code Workbook, Ontology, Objects) and Databricks (PySpark, SQL, Delta Lake).
• Implement ETL/ELT workflows, ensuring data quality, lineage, and governance across platforms.
• Model ontologies and object structures in Foundry to support operational and analytical use cases.
• Collaborate with cross-functional teams to translate business requirements into scalable data solutions.
• Automate workflows and CI/CD for data code and Foundry artifacts; manage permissions and operational deployments.
• Optimize performance through partitioning, caching, and query tuning in PySpark and Databricks.
• Document datasets, transformations, and business logic for transparency and reuse.
• Ensure compliance with data security, privacy, and governance standards.

Required Qualifications
• 8+ years of experience in data engineering.
• Hands-on experience with Palantir Foundry (Code Workbook, Ontology, Objects).
• Strong proficiency in PySpark, Python, and SQL.
• Experience with Databricks, Delta Lake, and cloud platforms (Azure/AWS).
• Solid understanding of ETL/ELT, data modeling, and performance optimization.
• Familiarity with Git, CI/CD, and agile delivery practices.