

VLink Inc
Palantir Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Palantir Data Engineer in Dallas, TX (hybrid); the contract length and pay rate are unspecified. Key skills include Python, SQL, and ETL, plus experience on Palantir client engagements.
Country
United States
Currency
$ USD
-
Day rate
Unknown
-
Date
March 17, 2026
Duration
Unknown
-
Location
Hybrid
-
Contract
Unknown
-
Security
Unknown
-
Location detailed
Dallas, TX
-
Skills detailed
#Scala #Debugging #CRM (Customer Relationship Management) #Consulting #Snowflake #SQL (Structured Query Language) #AWS (Amazon Web Services) #DevOps #Cloud #ETL (Extract, Transform, Load) #Data Transformations #Databases #Azure #GCP (Google Cloud Platform) #BO (Business Objects) #PySpark #Java #Security #Spark SQL #TypeScript #Compliance #JavaScript #Data Pipeline #Data Governance #Schema Design #GIT #Data Integration #Spark (Apache Spark) #AI (Artificial Intelligence) #Business Objects #Version Control #Big Data #Distributed Computing #S3 (Amazon Simple Storage Service) #Python #Data Engineering
Role description
Description:
Job Title: Palantir Data Engineer
Location: Dallas, TX - hybrid
About VLink: Founded in 2006 and headquartered in Connecticut, VLink is one of the fastest-growing digital technology services and consulting companies. Since its inception, our innovative team members have been solving the most complex business and IT challenges of our global clients.
Job Description:
The candidate should have prior experience working with Palantir on client engagements.
Data Integration & ETL: Build and manage scalable data pipelines to ingest data from diverse sources (ERP, CRM, APIs, S3, SQL databases) into Foundry.
Ontology Modeling: Define and maintain the "Ontology", the platform's semantic layer, which maps technical data to real-world business objects (e.g., "Aircraft," "Customer," or "Invoice").
Pipeline Development: Write and optimize data transformations using PySpark, SQL, or Java within Foundry's Code Repositories.
Application Building: Develop front-end operational applications and interactive dashboards using low-code/pro-code tools like Workshop and Slate.
AIP Integration: Implement Artificial Intelligence Platform (AIP) features, such as LLM-backed functions and agents, to automate workflows.
Data Governance & Security: Configure granular access controls, data health monitors, and lineage tracking to ensure compliance and reliability.
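In Foundry, an ingest-and-clean step like those above would typically run as a PySpark transform over a dataset; the stdlib-only Python sketch below mirrors the shape of such a step on a tiny in-memory sample. All record and field names (`customer_id`, `name`) are hypothetical illustrations, not taken from this posting.

```python
# Minimal sketch of an extract-transform step, assuming a CSV export from an
# upstream source (e.g., a CRM). Field names are hypothetical.
import csv
import io

def extract(csv_text):
    """Parse raw CSV records into a list of dicts."""
    return list(csv.DictReader(io.StringIO(csv_text)))

def transform(rows):
    """Normalize fields and drop rows that fail a basic data-health rule."""
    cleaned = []
    for row in rows:
        if not row.get("customer_id"):  # health rule: an id is required
            continue
        cleaned.append({
            "customer_id": row["customer_id"].strip(),
            "name": row.get("name", "").strip().title(),
        })
    return cleaned

raw = "customer_id,name\n 42 ,ada lovelace\n,missing id\n7,GRACE HOPPER\n"
result = transform(extract(raw))
print(result)
```

A production pipeline would express the same logic as a PySpark transformation and add lineage and health monitors around it, but the extract/clean/load shape is the same.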
Core Technical Skills
Languages: High proficiency in Python (PySpark) and SQL is mandatory. Knowledge of Java, TypeScript, or JavaScript is often required for front-end customization.
Big Data: Understanding of distributed computing (Spark), data warehousing concepts, and schema design (Star, Snowflake, etc.).
DevOps: Experience with Git-based version control, CI/CD practices, and debugging complex data workflows.
Cloud Architecture: Familiarity with AWS, Azure, or GCP environments where Foundry is typically hosted.
Equal Employment Opportunity (EEO) Statement:
VLink is an equal opportunity employer committed to fostering an inclusive environment where diversity is celebrated. All qualified applicants will be considered for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or veteran status. Employment is contingent upon successful completion of a background check and/or drug screening, as applicable. Applicant information will be handled in accordance with VLink's privacy policy.






