Palantir Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Palantir Data Engineer in Jersey City, NJ, hybrid, with a full-time contract lasting over 6 months. Key skills include AWS, ETL workflows, microservices, AI/ML frameworks, and big data technologies. No sponsorship available.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
-
🗓️ - Date discovered
August 19, 2025
🕒 - Project duration
More than 6 months
-
🏝️ - Location type
Hybrid
-
📄 - Contract type
Unknown
-
🔒 - Security clearance
Unknown
-
📍 - Location detailed
Jersey City, NJ
-
🧠 - Skills detailed
#Palantir Foundry #Snowflake #Data Pipeline #Spark (Apache Spark) #AI (Artificial Intelligence) #Programming #Kubernetes #S3 (Amazon Simple Storage Service) #Data Engineering #Apache Spark #AWS (Amazon Web Services) #Docker #TensorFlow #PyTorch #Redshift #Storage #Data Processing #Agile #Scala #Athena #ML (Machine Learning) #DevOps #Cloud #Python #Kafka (Apache Kafka) #ETL (Extract, Transform, Load) #Big Data #Microservices #Lambda (AWS Lambda) #PySpark
Role description
Dice is the leading career destination for tech experts at every stage of their careers. Our client, Infojini, is seeking the following. Apply via Dice today!

Job Title: Palantir Data Engineer
Location: Jersey City, NJ (Hybrid)
Type: Full Time Only (No Contract; C2C/W2/C2H/1099)
Work Authorization: This role is not eligible for company sponsorship now or in the future.

We are seeking a seasoned Data Engineer with experience in data engineering, AWS, microservices architecture, and AI/ML frameworks. The ideal candidate will design scalable data solutions, build robust pipelines, and integrate machine learning models into data workflows.

Key Requirements:
• Hands-on experience in data engineering, with a focus on ETL workflows, data pipelines, and cloud computing.
• Strong experience with AWS services for data processing and storage (e.g., S3, Glue, Athena, Lambda, Redshift).
• Proficiency in programming languages such as Python and PySpark.
• Deep understanding of microservices architecture and distributed systems.
• Familiarity with AI/ML tools and frameworks (e.g., TensorFlow, PyTorch) and their integration into data pipelines.
• Experience with big data technologies such as Apache Spark, Kafka, and Snowflake.
• Strong problem-solving and performance-optimization skills.
• Exposure to modern DevOps practices, including CI/CD pipelines and container orchestration tools such as Docker and Kubernetes.
• Experience working in agile environments delivering complex data engineering solutions.
• Proven expertise or certification in Palantir Foundry is a plus.