

Senior Data Engineer - Palantir
Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer with 5-10 years of experience, focusing on AWS, big data technologies, and Palantir Foundry. It is a full-time, on-site contract position lasting over 6 months, requiring strong skills in Python, ETL workflows, and DevOps practices.
Country
United States
Currency
Unknown
Day rate
-
Date discovered
August 10, 2025
Project duration
More than 6 months
Location type
On-site
Contract type
Unknown
Security clearance
Unknown
Location detailed
Jersey City, NJ 07302
Skills detailed
#BI (Business Intelligence) #Python #PySpark #Agile #Data Integration #Microservices #Lambda (AWS Lambda) #Snowflake #Data Engineering #Data Pipeline #AWS S3 (Amazon Simple Storage Service) #Scala #Jenkins #Palantir Foundry #Big Data #Git #Data Science #Apache Spark #Spark (Apache Spark) #TensorFlow #Athena #ML (Machine Learning) #Docker #Redshift #Kafka (Apache Kafka) #AI (Artificial Intelligence) #PyTorch #ETL (Extract, Transform, Load) #S3 (Amazon Simple Storage Service) #Cloud #Data Processing #DevOps #AWS (Amazon Web Services) #Kubernetes
Role description
Job Overview
We are seeking a Senior Data Engineer with deep technical expertise in designing and implementing scalable data pipelines, cloud-native architectures, and data integration strategies. The ideal candidate will have strong hands-on experience with AWS and big data technologies, plus a proven background in Palantir Foundry (or certification in it). As a Sr. Data Engineer, you will play a critical role in building, optimizing, and maintaining data solutions that empower analytics, machine learning, and business intelligence across the organization.
Duties
Design, build, and optimize ETL/ELT workflows and data pipelines for large-scale data processing.
Work with structured and unstructured data using technologies like Apache Spark, Kafka, and Snowflake.
Develop and maintain cloud-based data solutions using AWS services including S3, Glue, Athena, Lambda, and Redshift.
Collaborate with data scientists and ML engineers to integrate AI/ML models into production pipelines.
Lead the adoption and implementation of Palantir Foundry, building pipelines and data models within the platform.
Contribute to microservices-based architecture and ensure system reliability, scalability, and performance.
Apply DevOps best practices, including CI/CD pipelines, containerization with Docker, and orchestration using Kubernetes.
Engage in agile ceremonies and contribute to sprint planning, reviews, and retrospectives.
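For a flavor of the ETL/ELT work described above, here is a minimal pure-Python sketch of the extract-transform-load pattern (the field names and the transform logic are hypothetical; in this role the same pattern would typically be implemented with PySpark or AWS Glue at scale):

```python
import csv
import io

def transform(row: dict) -> dict:
    # Hypothetical transform step: normalize the name field
    # and cast the amount field from string to float.
    return {
        "name": row["name"].strip().title(),
        "amount": float(row["amount"]),
    }

def run_etl(source_csv: str) -> list[dict]:
    # Extract: parse CSV text into dict rows.
    reader = csv.DictReader(io.StringIO(source_csv))
    # Transform + Load: clean each row and collect the records
    # (a real pipeline would write to S3/Redshift/Snowflake instead).
    return [transform(row) for row in reader]
```

The same extract/transform/load separation carries over directly to a PySpark DataFrame pipeline; only the I/O and execution engine change.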
Qualifications
5–10 years of experience in data engineering roles with strong emphasis on pipeline development and data processing.
Proficient in Python and PySpark.
Deep hands-on experience with AWS (S3, Glue, Redshift, Lambda, Athena).
Expertise in big data tools such as Apache Spark, Kafka, and Snowflake.
Familiarity with Palantir Foundry is highly preferred (certification is a strong plus).
Strong understanding of distributed systems and microservices architecture.
Exposure to AI/ML frameworks like TensorFlow or PyTorch, and their application in data engineering.
Experience in DevOps/CI-CD, with knowledge of tools like Jenkins, Git, Docker, and Kubernetes.
Excellent problem-solving skills and the ability to work in fast-paced, agile environments.
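As an illustration of the AWS Lambda experience listed above, a minimal S3-triggered handler sketch (the event shape follows the standard S3 notification format, but the processing is hypothetical; a real handler would likely use boto3 to read the referenced objects):

```python
import json

def lambda_handler(event, context):
    # Collect the object keys from a hypothetical S3 event notification.
    records = event.get("Records", [])
    keys = [r["s3"]["object"]["key"] for r in records]
    # Return an API-style response summarizing what was processed.
    return {"statusCode": 200, "body": json.dumps({"processed": keys})}
```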
Job Types: Full-time, Contract
Work Location: In person