

CloudIngest
Data Scientist – AI/ML Engineer / Palantir Foundry (12+ Years' Experience)
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Scientist – AI/ML Engineer with 12+ years of experience, focusing on Palantir Foundry. Contract length and pay rate are unspecified. Key skills include Python, ML frameworks, NLP, Snowflake, and ETL/ELT pipelines.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
February 27, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
United States
-
🧠 - Skills detailed
#Databases #Schema Design #Data Science #Classification #Python #AWS (Amazon Web Services) #Clustering #ETL (Extract, Transform, Load) #Airflow #Model Deployment #Monitoring #AI (Artificial Intelligence) #Flask #Batch #NLP (Natural Language Processing) #Data Quality #Tableau #Visualization #Azure #Snowflake #Data Ingestion #NoSQL #Langchain #dbt (data build tool) #Deployment #Scala #Kubernetes #TensorFlow #ML (Machine Learning) #Data Modeling #Data Processing #SQL (Structured Query Language) #Docker #BI (Business Intelligence) #Microsoft Power BI #GCP (Google Cloud Platform) #Spark (Apache Spark) #Palantir Foundry #PyTorch #Cloud #Deep Learning #Kafka (Apache Kafka) #Looker #Automation #GraphQL #FastAPI #Data Pipeline #API (Application Programming Interface)
Role description
Data Scientist – AI/ML Engineer
Must-Have Skills:
AI/ML & Deep Learning:
• Strong experience with Python and ML frameworks (TensorFlow, PyTorch, Scikit-learn)
• End-to-end model building: data prep, training, evaluation, deployment
• Experience with NLP, embeddings, transformer architectures, and LLM fine-tuning
GPT / LLM / Agentic AI:
• Fine-tuning or prompting GPT-based LLMs
• Experience building RAG systems (Retrieval-Augmented Generation)
• Knowledge of vector databases
• Understanding of agentic frameworks (CrewAI, LangChain agents, AutoGen, etc.)
• Developing multi-agent systems with tools, memory, and planning loops
MLOps & Deployment:
• Airflow for workflow orchestration
• Docker for containerization
• Kubernetes for scalable deployment
• CI/CD for model deployment
• Model monitoring and drift detection
• API development (FastAPI, Flask)
• Experience with SQL and NoSQL data stores
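To make the RAG requirement above concrete, here is a minimal sketch (not part of the posting) of the retrieval step in a Retrieval-Augmented Generation system. It uses a toy in-memory vector store with hand-written embeddings; all document texts and embedding values are hypothetical, and a production system would use model-generated embeddings and a real vector database.

```python
import math

# Toy in-memory "vector database": (text, embedding) pairs.
# Embeddings here are hypothetical; in practice they come from a model.
DOCS = [
    ("Foundry ingests data via syncs.", [0.9, 0.1, 0.0]),
    ("Snowflake supports clustering keys.", [0.1, 0.8, 0.1]),
    ("Airflow schedules DAG runs.", [0.0, 0.2, 0.9]),
]

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_emb, k=1):
    """Return the k documents most similar to the query embedding."""
    ranked = sorted(DOCS, key=lambda d: cosine(query_emb, d[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

def build_prompt(question, query_emb):
    """Augment the question with retrieved context before calling an LLM."""
    context = "\n".join(retrieve(query_emb, k=1))
    return f"Context:\n{context}\n\nQuestion: {question}"

prompt = build_prompt("How does Foundry ingest data?", [0.95, 0.05, 0.0])
```

The same retrieve-then-augment pattern underlies the agentic frameworks named above (LangChain, CrewAI, AutoGen), which wrap retrieval as a tool the agent can call.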
Must have experience in:
• Palantir Foundry (data ingestion, transformations, ontology modeling, analytics workflows)
• Snowflake (advanced SQL, performance tuning, analytical schema design)
• Building scalable ETL/ELT data pipelines (batch, near real-time)
• Data modeling for analytics and reporting (layered architectures)
• Working with relational and NoSQL databases
• Python for data processing and automation
• Data quality, governance, and lineage practices
• Unsupervised clustering and classification of structured data in DBMS/files and unstructured data
• Creating multi-stage analysis/transformation pipelines like Contour in Palantir Foundry
• Distributed data processing frameworks like Spark
• Querying using GraphQL, Scalding, etc.
• Creating apps, reports, and dashboards such as those in Palantir Foundry
Good to have:
• dbt, Airflow, Spark or similar orchestration/processing frameworks
• BI & visualization tools (Tableau, Power BI, Looker, etc.)
• Streaming data platforms (Kafka/Kinesis)
• Cloud platforms (AWS/Azure/GCP)
• ML feature engineering or analytics support
• Agentic AI skills
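To illustrate the unsupervised-clustering requirement above, the following is a minimal standard-library sketch of k-means on one-dimensional toy data. The dataset, cluster count, and iteration budget are arbitrary assumptions for the example; real work would use a library such as Scikit-learn on multi-dimensional features.

```python
import random

def kmeans_1d(points, k=2, iters=20, seed=0):
    """Minimal k-means on 1-D data: assign each point to the nearest
    centroid, then recompute each centroid as its cluster's mean."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # Keep the old centroid if a cluster ends up empty.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

# Two obvious groups, centered around 1 and 10 (hypothetical data).
data = [0.9, 1.1, 1.0, 9.8, 10.2, 10.0]
centers = kmeans_1d(data, k=2)
```

The same assign-and-update loop generalizes directly to higher dimensions by swapping the absolute-difference distance for Euclidean distance.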





