Quantum World Technologies Inc.

Data Engineer (Ex. Palantir)

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer with past Palantir client experience, based in Dallas, TX. Contract length and pay rate are unspecified. Key skills include Python, SQL, ETL, and cloud architecture. Experience with data governance and big data is essential.
🌎 - Country
United States
πŸ’± - Currency
$ USD
-
πŸ’° - Day rate
Unknown
-
πŸ—“οΈ - Date
March 17, 2026
πŸ•’ - Duration
Unknown
-
🏝️ - Location
On-site
-
πŸ“„ - Contract
Unknown
-
πŸ”’ - Security
Unknown
-
πŸ“ - Location detailed
Dallas, TX
-
🧠 - Skills detailed
#Scala #Debugging #CRM (Customer Relationship Management) #Snowflake #SQL (Structured Query Language) #AWS (Amazon Web Services) #DevOps #Cloud #Data Transformations #Databases #Azure #GCP (Google Cloud Platform) #BO (Business Objects) #PySpark #Java #Security #Spark SQL #TypeScript #Compliance #JavaScript #Data Pipeline #Data Governance #Schema Design #GIT #Data Integration #Spark (Apache Spark) #Data Engineering #AI (Artificial Intelligence) #Business Objects #Version Control #Big Data #Distributed Computing #S3 (Amazon Simple Storage Service) #Python #ETL (Extract, Transform, Load)
Role description
Position: Data Engineer (ex-Palantir)
Location: Dallas, TX
Candidates should have past Palantir client experience.

Responsibilities:
• Data Integration & ETL: Build and manage scalable data pipelines to ingest data from diverse sources (ERP, CRM, APIs, S3, SQL databases) into Foundry.
• Ontology Modeling: Define and maintain the "Ontology", the platform's semantic layer, which maps technical data to real-world business objects (e.g., "Aircraft," "Customer," or "Invoice").
• Pipeline Development: Write and optimize data transformations using PySpark, SQL, or Java within Foundry's Code Repositories.
• Application Building: Develop front-end operational applications and interactive dashboards using low-code/pro-code tools such as Workshop and Slate.
• AIP Integration: Implement Artificial Intelligence Platform (AIP) features, such as LLM-backed functions and agents, to automate workflows.
• Data Governance & Security: Configure granular access controls, data health monitors, and lineage tracking to ensure compliance and reliability.

## Core Technical Skills
• Languages: High proficiency in Python (PySpark) and SQL is mandatory. Knowledge of Java, TypeScript, or JavaScript is often required for front-end customization.
• Big Data: Understanding of distributed computing (Spark), data warehousing concepts, and schema design (star, snowflake, etc.).
• DevOps: Experience with Git-based version control, CI/CD practices, and debugging complex data workflows.
• Cloud Architecture: Familiarity with AWS, Azure, or GCP environments where Foundry is typically hosted.
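For a sense of the day-to-day ETL work described above, here is a minimal sketch of an extract-transform-load step in Python. It is illustrative only: the table and column names are hypothetical, and the standard-library sqlite3 module stands in for the ERP/CRM sources and Foundry datasets named in the posting (in Foundry itself this logic would typically live in a PySpark or SQL transform).

```python
import sqlite3

# Hypothetical example: sqlite3 stands in for the ERP/CRM sources and
# target datasets mentioned in the posting; table and column names are
# illustrative only.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Extract: a raw "invoices" feed, as it might arrive from an ERP export,
# with amounts as strings and inconsistent currency codes.
cur.execute("CREATE TABLE raw_invoices (id TEXT, amount TEXT, currency TEXT)")
cur.executemany(
    "INSERT INTO raw_invoices VALUES (?, ?, ?)",
    [("INV-1", "100.50", "usd"), ("INV-2", "  250.00", "USD"), ("INV-3", None, "USD")],
)

# Transform: cast amounts to numeric, normalize currency codes, and drop
# rows that fail a basic validation check.
cur.execute("CREATE TABLE invoices (id TEXT PRIMARY KEY, amount REAL, currency TEXT)")
cur.execute(
    """
    INSERT INTO invoices
    SELECT id, CAST(TRIM(amount) AS REAL), UPPER(currency)
    FROM raw_invoices
    WHERE amount IS NOT NULL
    """
)

# Load/verify: the cleaned table is ready for downstream consumers.
rows = cur.execute("SELECT id, amount, currency FROM invoices ORDER BY id").fetchall()
print(rows)  # [('INV-1', 100.5, 'USD'), ('INV-2', 250.0, 'USD')]
```

The same pattern (ingest raw records, validate and normalize them, publish a clean dataset with lineage and access controls) is what the pipeline-development and governance bullets above describe at Foundry scale.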