

Lead Palantir Data Engineer (15+ Exp)
Featured Role | Apply direct with Data Freelance Hub
This role is for a Lead Palantir Data Engineer with 15+ years of experience, offering a remote contract. Key skills include Palantir Foundry, PySpark, and AWS. Proficiency in SQL, Python, and Linux is required, along with experience in data pipelines and cross-functional collaboration.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: July 11, 2025
Project duration: Unknown
Location type: Remote
Contract type: W2 Contractor
Security clearance: Unknown
Location detailed: United States
Skills detailed: #Jenkins #SQL (Structured Query Language) #GitLab #Data Pipeline #GIT #Monitoring #Palantir Foundry #BitBucket #AWS (Amazon Web Services) #Spark (Apache Spark) #Automation #Python #Scala #Linux #PySpark #Data Engineering
Role description
Title: Lead Palantir Data Engineer (15+ Exp)
Location: Remote
W2 / C2C
Skills: Palantir Foundry, PySpark, AWS
Responsibilities
• Develop and enhance data-processing, orchestration, monitoring, and more by leveraging popular open-source software, AWS, and GitLab automation.
• Collaborate with product and technology teams to design and validate the capabilities of the data platform.
• Identify, design, and implement process improvements: automating manual processes, optimizing for usability, re-designing for greater scalability.
• Provide technical support and usage guidance to the users of our platform's services.
• Drive the creation and refinement of metrics, monitoring, and alerting mechanisms to give us the visibility we need into our production services.
Qualifications
• Experience building and optimizing data pipelines in a distributed environment
• Experience supporting and working with cross-functional teams
• Proficiency working in a Linux environment
• 5+ years of advanced working knowledge of SQL, Python, and PySpark
• Knowledge of Palantir
• Experience using tools such as: Git/Bitbucket, Jenkins/CodeBuild, CodePipeline
• Experience with platform monitoring and alerting tools
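As a rough illustration of the SQL-plus-Python pipeline work the qualifications describe, here is a minimal sketch using Python's standard-library sqlite3 in place of a Spark cluster; the table and column names are invented for the example and are not from the posting:

```python
import sqlite3

# In-memory database standing in for a warehouse table (names are hypothetical).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id INTEGER, amount REAL)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [(1, 10.0), (1, 5.0), (2, 7.5)],
)

# A typical pipeline step: aggregate raw events into a per-user summary.
rows = conn.execute(
    "SELECT user_id, SUM(amount) AS total "
    "FROM events GROUP BY user_id ORDER BY user_id"
).fetchall()
print(rows)  # [(1, 15.0), (2, 7.5)]
conn.close()
```

In PySpark the same step would typically be a `groupBy`/`agg` on a DataFrame or an equivalent `spark.sql` query.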