

Data Engineer (Palantir)
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer (Palantir) in Jersey City, NJ (Hybrid), with a contract duration of over 6 months and a pay rate of "TBD." Candidates should have 3–10 years of experience, strong AWS skills, and familiarity with Palantir Foundry.
🌎 - Country
United States
💱 - Currency
Unknown
💰 - Day rate
-
🗓️ - Date discovered
August 8, 2025
🕒 - Project duration
More than 6 months
🏝️ - Location type
Hybrid
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
Jersey City, NJ
🧠 - Skills detailed
#Debugging #Automation #ETL (Extract, Transform, Load) #GIT #Lambda (AWS Lambda) #Palantir Foundry #Scala #Big Data #Cloud #Kafka (Apache Kafka) #Python #Data Management #Batch #Compliance #Data Pipeline #Kubernetes #Agile #S3 (Amazon Simple Storage Service) #PySpark #DevOps #Apache Spark #ML (Machine Learning) #Athena #Docker #Data Engineering #Redshift #Microservices #Snowflake #Spark (Apache Spark) #PyTorch #AI (Artificial Intelligence) #AWS (Amazon Web Services) #TensorFlow #Data Processing #Data Governance #Metadata
Role description
Location: Jersey City, NJ (Hybrid) | Experience Level: 3–10 Years | Employment Type: Full-Time / Contract
Position Overview:
We are seeking a dynamic and experienced Data Engineer to support the development and integration of enterprise-scale data workflows, with a strong emphasis on AWS, microservices, and AI/ML frameworks. The ideal candidate will have hands-on experience with Palantir Foundry or a strong desire to work with the platform, and will contribute to designing scalable data pipelines, managing complex integrations, and enabling machine learning in a production environment.
Key Responsibilities:
Design, build, and optimize ETL workflows and data pipelines to handle large-scale structured and unstructured data (a minimal PySpark sketch follows this list)
Integrate machine learning models into data flows and production environments
Develop cloud-native solutions using AWS services like S3, Glue, Athena, Lambda, and Redshift
Collaborate across engineering, DevOps, and analytics teams to deploy robust and reusable components
Apply microservices architecture principles to build modular and scalable solutions
Leverage big data technologies such as Apache Spark, Kafka, and Snowflake for high-performance data processing
Contribute to containerized environments using Docker and orchestration platforms like Kubernetes
Follow Agile development practices, sprint cycles, and DevOps-driven workflows
Document and share best practices in code, architecture, and performance tuning
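For a concrete sense of the pipeline work above, here is a minimal PySpark ETL sketch: read raw JSON events from S3, clean and deduplicate them, and write partitioned Parquet back to S3. All bucket paths and column names are hypothetical placeholders, not references to any actual system.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events-etl").getOrCreate()

# Extract: raw, semi-structured events landed in S3
raw = spark.read.json("s3a://example-raw-bucket/events/")

# Transform: drop malformed rows, normalize the timestamp, derive a
# partition column, and deduplicate on the event key
clean = (
    raw.dropna(subset=["event_id", "event_ts"])
       .withColumn("event_ts", F.to_timestamp("event_ts"))
       .withColumn("event_date", F.to_date("event_ts"))
       .dropDuplicates(["event_id"])
)

# Load: columnar output partitioned by date for efficient downstream queries
(clean.write
      .mode("overwrite")
      .partitionBy("event_date")
      .parquet("s3a://example-curated-bucket/events/"))

The same pattern scales from ad-hoc jobs to scheduled production pipelines; only the sources, sinks, and orchestration change.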
Required Skills & Qualifications
3–10 years of experience in data engineering, ETL, and cloud-based data solutions
Strong proficiency in Python and PySpark
Hands-on experience with AWS data services (Glue, S3, Redshift, Lambda, Athena, etc.)
Deep understanding of microservices and distributed systems
Familiarity with AI/ML workflows and tools such as TensorFlow or PyTorch
Experience with Apache Spark, Kafka, or Snowflake for real-time/batch processing (see the streaming sketch after this list)
Exposure to CI/CD, Git, and infrastructure automation pipelines
Strong debugging, optimization, and performance tuning skills
Excellent verbal and written communication skills
Agile mindset with the ability to work in cross-functional teams
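To illustrate the real-time processing requirement above, here is a minimal Spark Structured Streaming sketch that consumes a Kafka topic and writes to the console sink. The broker address, topic name, and checkpoint path are hypothetical, and the spark-sql-kafka connector is assumed to be on the classpath.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-stream").getOrCreate()

# Subscribe to a Kafka topic (broker and topic are placeholders)
stream = (spark.readStream
               .format("kafka")
               .option("kafka.bootstrap.servers", "broker:9092")
               .option("subscribe", "orders")
               .load())

# Kafka delivers key/value as binary; cast the value to a string payload
orders = stream.select(F.col("value").cast("string").alias("payload"))

# Console sink for demonstration; a production job would write to S3,
# Redshift, or Snowflake, with checkpointing for fault tolerance
query = (orders.writeStream
               .format("console")
               .option("checkpointLocation", "/tmp/checkpoints/orders")
               .start())
query.awaitTermination()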
Nice-to-Have
Prior hands-on experience or certification in Palantir Foundry
Experience building workflows using Palantir’s Code Workbooks or Object Builders (see the transform sketch after this list)
Familiarity with data governance, lineage, and metadata management tools
Understanding of compliance and privacy standards in data engineering (e.g., HIPAA, SOC 2)
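For candidates new to Foundry, the sketch below shows roughly what a Python transform looks like in Foundry Code Repositories, assuming the transforms.api decorator interface; the dataset paths and column names are hypothetical.

from transforms.api import transform_df, Input, Output

# Registers a PySpark transform whose input and output are Foundry
# datasets; Foundry tracks the lineage between them automatically
@transform_df(
    Output("/Example/datasets/clean_events"),
    raw=Input("/Example/datasets/raw_events"),
)
def clean_events(raw):
    # 'raw' arrives as a PySpark DataFrame; the returned DataFrame is
    # written to the declared Output dataset
    return raw.dropna(subset=["event_id"]).dropDuplicates(["event_id"])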
How to Apply
Submit your resume and include highlights of your AWS, big data, and AI/ML integration experience. Please mention any prior experience with Palantir Foundry if applicable.