

Haystack
Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer on a contract of more than 6 months, paying a salary of £50,000 - £65,000. Key skills include Palantir Foundry, Python, PySpark, SQL, and experience in multi-cloud environments. Remote work is available.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
295
-
🗓️ - Date
April 18, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
Remote
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
London, England, United Kingdom
-
🧠 - Skills detailed
#Deployment #Data Pipeline #Base #Scripting #Data Modeling #Azure #AWS (Amazon Web Services) #Data Integrity #Datasets #PySpark #Scala #ETL (Extract, Transform, Load) #AI (Artificial Intelligence) #Monitoring #Pandas #Data Migration #Python #Cloud #Data Engineering #Palantir Foundry #ML (Machine Learning) #Migration #Programming #SQL (Structured Query Language) #Data Access #Spark (Apache Spark)
Role description
Data Engineer | £50,000 - £65,000
We're working with a leading UK-wide infrastructure and enterprise facilities management powerhouse on this exciting opportunity.
Join a dynamic data squad and take full ownership of end-to-end data pipelines within a sophisticated Palantir Foundry ecosystem. You will be at the heart of transforming complex datasets using Python, PySpark, and SQL to drive high-stakes decision-making across a massive enterprise landscape.
The Role
• Develop high-performance data pipelines and seamless transformations using Python (Pandas) and PySpark within the Palantir Foundry environment.
• Design and implement scalable, reusable data workflows that curate enterprise-level data for advanced analytics and machine learning use cases.
• Lead collaborative workshops with stakeholders to capture requirements and translate them into robust ontology designs and data solutions.
• Manage day-to-day BAU tasks, maintaining existing ETL processes while contributing to strategic roadmap projects and technology migrations.
• Ensure data integrity and performance by monitoring pipelines and optimizing processes to meet stringent enterprise SLAs.
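To give a flavour of the pipeline work described above, here is a minimal sketch of a clean-and-curate transformation in Pandas. The dataset, column names, and cleaning rules are hypothetical illustrations, not taken from this posting; in Foundry the input would typically arrive as a managed dataset rather than an inline DataFrame.

```python
import pandas as pd

# Hypothetical raw meter readings from a facilities estate.
raw = pd.DataFrame({
    "site": ["LDN-01", "LDN-01", "MCR-02", None],
    "reading_kwh": ["120.5", "98.0", "76.0", "55.2"],
    "read_at": ["2026-04-01", "2026-04-02", "2026-04-01", "2026-04-02"],
})

def transform(df: pd.DataFrame) -> pd.DataFrame:
    """Clean raw readings and curate them into one row per site."""
    out = df.dropna(subset=["site"]).copy()              # drop rows with no site
    out["reading_kwh"] = pd.to_numeric(out["reading_kwh"], errors="coerce")
    out = out.dropna(subset=["reading_kwh"])             # drop unparseable readings
    out["read_at"] = pd.to_datetime(out["read_at"])
    # Aggregate to a site-level summary for downstream analytics.
    return out.groupby("site", as_index=False)["reading_kwh"].sum()

curated = transform(raw)
```

The same shape of logic carries over to PySpark (`filter`, `withColumn`, `groupBy`/`agg`) when datasets outgrow a single machine.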
What You'll Need
• Proven expertise in Palantir Foundry development, including extensive use of Code Workbooks, Pipeline Builder, and Object Explorer.
• At least 5 years of advanced programming experience specializing in Python, PySpark, and complex SQL scripting.
• Strong background in data modeling and ontology development to optimize data accessibility for reporting and AI workflows.
• Practical experience in automating ETL processes and working within CI/CD frameworks for seamless code deployment.
• Experience navigating multi-cloud environments (Azure/AWS) and managing data migrations from legacy systems into modern platforms.
What's On Offer
• £50,000 - £65,000 base salary + £5,200 annual car allowance.
• Performance-linked bonus scheme (5% - 15%) and comprehensive pension plan.
• Fully remote working model with all high-end equipment provided.
• Significant career progression opportunities within a major market-leading organization.
Apply via Haystack today!



