

Haystack
Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer specializing in Palantir Foundry, offering £400 - £500 for an initial 3-month contract. Key skills include Python, PySpark, and cloud experience (AWS, Azure, GCP). Remote work with occasional client site travel required.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
500
-
🗓️ - Date
March 27, 2026
🕒 - Duration
3 to 6 months
-
🏝️ - Location
Remote
-
📄 - Contract
Inside IR35
-
🔒 - Security
Unknown
-
📍 - Location detailed
London, England, United Kingdom
-
🧠 - Skills detailed
#Data Pipeline #Palantir Foundry #Azure #Scala #Data Governance #Metadata #AWS (Amazon Web Services) #Cloud #Schema Design #Data Management #PySpark #Spark (Apache Spark) #GCP (Google Cloud Platform) #Data Processing #GIT #Automation #Data Modeling #Semantic Models #Version Control #Data Engineering #Data Science #Spark SQL #Python #ETL (Extract, Transform, Load) #SQL (Structured Query Language)
Role description
Palantir Foundry Data Engineer X2 | £400 - £500
For this exciting opportunity, we're working with a leading digital transformation consultancy that delivers high-impact data infrastructure.
Are you a Palantir Foundry specialist looking for your next challenge? We are scaling a high-performing engineering team to build production-grade data foundations using Python, PySpark, and Spark SQL within the Foundry ecosystem. This is a high-impact, remote-friendly contract where you will design scalable semantic models and data products that drive real-world operational decision-making.
The Role
• Design, build, and optimize scalable data pipelines and complex semantic models directly within the Palantir Foundry platform.
• Collaborate with Data Scientists and Product teams to translate business requirements into robust, production-ready data foundations.
• Engineer high-performance data products using PySpark and Spark SQL to support advanced analytics and automation.
• Implement best-in-class metadata management and data governance practices to ensure long-term platform sustainability.
• Drive reliability and performance by integrating CI/CD pipelines and Git-based workflows into the data engineering lifecycle.
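As a rough, hypothetical flavour of the transform work described above: the role centres on PySpark/Spark SQL pipelines inside Foundry, and the sketch below shows the same kind of group-and-aggregate step in plain Python (stdlib only, so it stays self-contained). The dataset, column names, and figures are invented for illustration and do not come from the role.

```python
import csv
import io

# Hypothetical raw extract: order events as CSV text,
# standing in for an upstream dataset in a pipeline.
raw = """order_id,region,amount
1,EMEA,120.50
2,APAC,80.00
3,EMEA,45.25
"""

def transform(rows):
    """Aggregate order amounts per region -- the kind of step a
    PySpark groupBy/agg would perform at scale in production."""
    totals = {}
    for row in rows:
        totals[row["region"]] = totals.get(row["region"], 0.0) + float(row["amount"])
    return totals

rows = list(csv.DictReader(io.StringIO(raw)))
print(transform(rows))  # {'EMEA': 165.75, 'APAC': 80.0}
```

In a Foundry context the same logic would typically be expressed as a PySpark transform over managed datasets rather than in-memory Python; this sketch only illustrates the shape of the work.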
What You'll Need
• Proven, hands-on experience as a Data Engineer specifically within a Palantir Foundry production environment.
• Deep technical proficiency in Python and Spark (PySpark/Spark SQL) for large-scale data processing.
• Strong background delivering production-grade pipelines within major cloud ecosystems such as AWS, Azure, or GCP.
• Solid understanding of software engineering best practices, including version control (Git) and modular code design.
• Expert knowledge of data modeling, schema design, and architecting data for downstream consumption.
What's On Offer
• Competitive day rate of £400 - £500 (Inside IR35) for an initial 3-month engagement.
• Remote-first working environment with only occasional travel required to client sites.
• The opportunity to work on high-profile, enterprise-level Palantir implementation projects.
• Collaborative culture working alongside some of the sharpest minds in data engineering and analytics.
Apply via Haystack today!



