

Data Engineer - PySpark - Palantir
Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer specializing in PySpark and Palantir, offering a 6-month contract inside IR35. It requires 5+ years of experience, SC clearance (or clearability), and expertise in data pipelines, SQL, and cloud technologies. Hybrid in London.
Country
United Kingdom
Currency
£ GBP
Day rate
-
Date discovered
May 28, 2025
Project duration
More than 6 months
Location type
Hybrid
Contract type
Inside IR35
Security clearance
Yes
Location detailed
London Area, United Kingdom
Skills detailed
#Data Engineering #Hadoop #PySpark #Spark SQL #Databases #Cloud #Data Science #Azure #Scrum #Data Management #Data Warehouse #Palantir Foundry #Data Lake #Computer Science #Consul #ETL (Extract, Transform, Load) #Data Pipeline #JavaScript #SQL (Structured Query Language) #AWS (Amazon Web Services) #Scala #Agile #Distributed Computing #Python #Kubernetes #HTML (Hypertext Markup Language) #Spark (Apache Spark)
Role description
Data Engineer - PySpark with Palantir
Hybrid - London 2-3 days onsite
6 months - Inside IR35
SC Cleared or SC Clearable
About the Role:
We are working with a leading consultancy that is looking for an experienced Data Engineer with strong expertise in PySpark, Python, and SQL. The ideal candidate will have hands-on experience with the Palantir Foundry platform and a passion for building scalable data solutions. You will play a key role in designing and delivering data pipelines and warehouses, working with cutting-edge cloud technologies and driving impactful projects within a fast-paced environment.
Key Responsibilities:
• Develop and maintain data stores and data warehouse solutions
• Design and implement data pipelines integrating diverse data sources into Azure Data Lake and Azure Databases (an illustrative sketch follows this list)
• Collaborate with Product Owners and Solution Architects to translate business requirements into technical solutions
• Lead design, estimation, and planning activities for projects and PoCs
• Deliver high-quality solutions independently and as part of an agile team
• Mentor and lead junior Data Engineers, fostering collaboration and knowledge sharing
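As a flavour of the pipeline work described above, the following is a minimal PySpark sketch that reads a raw dataset, applies a simple curation step, and writes the result to an Azure Data Lake Storage Gen2 path. The storage account, container, dataset, and column names are hypothetical placeholders rather than details of this engagement.

```python
# Minimal PySpark sketch: read a raw source, curate it, and write the output
# to an Azure Data Lake Storage Gen2 location. All paths and column names
# below are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-curation").getOrCreate()

# Hypothetical ADLS Gen2 paths: abfss://<container>@<account>.dfs.core.windows.net/<path>
raw_path = "abfss://raw@examplelake.dfs.core.windows.net/orders/"
curated_path = "abfss://curated@examplelake.dfs.core.windows.net/orders_daily/"

orders = spark.read.parquet(raw_path)

# Simple curation step: de-duplicate, derive a date column, aggregate to a daily grain
daily = (
    orders.dropDuplicates(["order_id"])
          .withColumn("order_date", F.to_date("order_ts"))
          .groupBy("order_date", "country")
          .agg(
              F.sum("amount").alias("total_amount"),
              F.countDistinct("order_id").alias("order_count"),
          )
)

daily.write.mode("overwrite").partitionBy("order_date").parquet(curated_path)
```

The same read-transform-write pattern applies whatever the concrete source and target; only the connectors and paths change.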
Skills & Experience:
• Minimum 5 years' experience in PySpark, Python, and SQL
• Proven experience with the Palantir Foundry platform
• Strong background in enterprise data analytics and distributed computing frameworks (Spark/Hive/Hadoop preferred)
• Demonstrated ability to design end-to-end data management and transformation solutions
• Proficient in Spark SQL and familiar with cloud platforms such as Azure or AWS (a short Spark SQL sketch follows this list)
• Experience with Scrum/Agile methodologies
• Knowledge of JavaScript/HTML/CSS and Kubernetes is a plus
• Bachelor's degree or equivalent in Computer Science, Data Science, or related field
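For the Spark SQL proficiency noted above, here is a short sketch of registering a DataFrame as a temporary view and querying it with Spark SQL; the path, view, and column names are again hypothetical.

```python
# Spark SQL sketch: expose a DataFrame as a temporary view and query it with SQL.
# Path, view, and column names are hypothetical placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("spark-sql-example").getOrCreate()

customers = spark.read.parquet("abfss://curated@examplelake.dfs.core.windows.net/customers/")
customers.createOrReplaceTempView("customers")

top_regions = spark.sql("""
    SELECT region, COUNT(*) AS customer_count
    FROM customers
    GROUP BY region
    ORDER BY customer_count DESC
    LIMIT 10
""")
top_regions.show()
```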