

Tech Observer
Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer with a contract length of "unknown" and a day rate of $536. It requires expertise in Palantir Foundry, Snowflake, ETL/ELT pipelines, data modeling, Python, and distributed processing frameworks.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
536
-
🗓️ - Date
February 14, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Middletown, NJ
-
🧠 - Skills detailed
#Kafka (Apache Kafka) #Automation #Visualization #Data Pipeline #Batch #ETL (Extract, Transform, Load) #Palantir Foundry #NoSQL #GraphQL #AI (Artificial Intelligence) #Snowflake #Scala #GCP (Google Cloud Platform) #Clustering #dbt (data build tool) #Tableau #Classification #ML (Machine Learning) #SQL (Structured Query Language) #Databases #BI (Business Intelligence) #Schema Design #Microsoft Power BI #AWS (Amazon Web Services) #Data Processing #Data Ingestion #Data Modeling #Spark (Apache Spark) #Looker #Data Engineering #Python #Azure #Cloud #Airflow #Data Quality
Role description
Profile:
Strong platform-focused data engineer with deep hands-on Palantir and Snowflake experience who can both build high-performance data systems and actively analyze data to uncover patterns, trends, and actionable business insights (not just pipeline maintenance).
Must have experience in:
• Palantir Foundry (data ingestion, transformations, ontology modeling, analytics workflows)
• Snowflake (advanced SQL, performance tuning, analytical schema design)
• Building scalable ETL/ELT data pipelines (batch + near real-time)
• Data modeling for analytics and reporting layers
• Working with relational and NoSQL databases
• Python for data processing and automation
• Data quality, governance, and lineage practices
• Unsupervised clustering and classification of structured data (in DBMSs/files) and unstructured data
• Creating multi-stage analysis/transformation pipelines (e.g., Contour in Palantir Foundry)
• Distributed data processing frameworks like Spark
• Querying using Spark SQL, GraphQL, etc.
• Creating apps, reports, and dashboards like those in Palantir Foundry
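To make the pipeline expectations above concrete, here is a minimal sketch of a batch ETL flow with a data-quality gate, in pure Python. It is illustrative only: the sample data, the quality rule, and the function names (extract, transform, load) are hypothetical stand-ins for the Foundry/Snowflake pipeline work the role describes.

```python
import csv
import io

# Hypothetical raw extract; row 2 has a missing amount and will be rejected.
RAW = """order_id,region,amount
1,east,120.50
2,west,
3,east,75.00
"""

def extract(text):
    """Extract: parse raw CSV text into a list of dicts."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: apply a simple data-quality rule and cast types."""
    clean = []
    for r in rows:
        if not r["amount"]:  # quality gate: drop rows with missing amounts
            continue
        clean.append({"order_id": int(r["order_id"]),
                      "region": r["region"],
                      "amount": float(r["amount"])})
    return clean

def load(rows):
    """Load: aggregate into a reporting-layer table of per-region totals."""
    totals = {}
    for r in rows:
        totals[r["region"]] = totals.get(r["region"], 0.0) + r["amount"]
    return totals

totals = load(transform(extract(RAW)))
print(totals)  # {'east': 195.5}
```

In a real engagement the same extract/quality-gate/aggregate shape would run on Spark or inside Foundry transforms rather than in-process Python.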
Good to have:
• dbt, Airflow, Spark or similar orchestration/processing frameworks
• BI & visualization tools (Tableau, Power BI, Looker, etc.)
• Streaming data platforms (Kafka/Kinesis)
• Cloud platforms (AWS/Azure/GCP)
• ML feature engineering or analytics support
• Agentic AI skills
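The orchestration tools in the list above (dbt, Airflow) ultimately schedule tasks by their dependency order. As a hedged illustration of that idea only, here is the core dependency ordering reduced to a topological sort with the standard library; the task names and the DAG are hypothetical, not an Airflow API.

```python
from graphlib import TopologicalSorter

# Hypothetical DAG: each task maps to the set of upstream tasks it depends on.
dag = {
    "ingest":    set(),
    "clean":     {"ingest"},
    "model":     {"clean"},
    "dashboard": {"model", "clean"},
}

# static_order() yields the tasks in a valid execution order.
order = list(TopologicalSorter(dag).static_order())
print(order)
```

Airflow and dbt add scheduling, retries, and state on top, but the execution order they compute obeys the same constraint: every task runs after all of its upstream dependencies.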






