CloudHive

Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is a Data Engineer position for a 3-month contract, paying $80-$90 per hour, remote. Key skills include data pipeline design, automation, and proficiency in Python, Java, or SQL, with experience in Palantir Foundry and data integration.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
720
-
🗓️ - Date
March 11, 2026
🕒 - Duration
3 to 6 months
-
🏝️ - Location
Remote
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
United States
-
🧠 - Skills detailed
#Python #Data Engineering #Palantir Foundry #Data Pipeline #Data Integration #Programming #"ETL (Extract, Transform, Load)" #Automation #SQL (Structured Query Language) #Java #Data Processing #Computer Science #Scala
Role description
Palantir Data Engineer - Palantir Foundry, Ontology, AIP
Location: Remote
Contract: 3 months initial
Rate: $80-$90 per hour

We are seeking a highly skilled Data Engineer to join our client's team. In this role, you will design, implement, and optimize data pipelines, automations, and document ingestion processes to enhance data integration capabilities. You will work closely with cross-functional teams to ensure seamless data flow and support clients' analytical needs.

Key Responsibilities:
• Data Pipelines: Design and optimize robust data pipelines that support efficient data processing and analysis.
• Functions: Develop scalable functions for data transformation and analysis, ensuring optimal performance within the Palantir platform.
• Automations: Create automated workflows to streamline data processing tasks, improving operational efficiency and reducing manual intervention.
• AIP Logic: Integrate and optimize AIP (Palantir's Artificial Intelligence Platform) logic to improve data processing capabilities.
• Ontology Objects: Work with ontology objects to establish a structured framework for data relationships and definitions.
• Document Ingestion: Manage the document ingestion process to ensure accurate data capture and transformation from various sources.
• Connectors: Build and maintain connectors for seamless integration with external data sources and systems.

Qualifications:
• Bachelor's degree in Computer Science, Engineering, or a related field.
• Proven experience in data engineering, with a focus on data pipeline design and implementation.
• Proficiency in programming languages such as Python, Java, or SQL.
• Familiarity with data integration tools and platforms, and experience automating workflows.
• Strong problem-solving skills and the ability to work collaboratively within a team.
• Excellent communication skills, with the ability to explain complex concepts to both technical and non-technical stakeholders.
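To give candidates a flavor of the transformation and ingestion work described above, here is a minimal, generic sketch in Python of a pipeline stage that normalizes raw ingested document payloads onto a canonical schema. This is an illustration only, not Foundry-specific code; every field and function name here is hypothetical, and a real connector would be driven by the source system's actual schema.

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class DocumentRecord:
    """Canonical output schema of a document-ingestion step (hypothetical)."""
    source_id: str
    title: str
    ingested_at: str  # ISO-8601 UTC timestamp


def normalize(raw: dict) -> DocumentRecord:
    """Map one raw ingested payload onto the pipeline's canonical schema.

    Field names ("id", "title") are assumptions for this sketch.
    """
    return DocumentRecord(
        source_id=str(raw["id"]),
        # Trim whitespace; fall back to a placeholder when the title is missing.
        title=raw.get("title", "").strip() or "(untitled)",
        ingested_at=datetime.now(timezone.utc).isoformat(),
    )


# Example: a small batch of raw payloads flowing through the stage.
batch = [{"id": 1, "title": "  Q1 Report "}, {"id": 2}]
records = [normalize(r) for r in batch]
```

Normalizing at the ingestion boundary like this keeps downstream transforms and ontology mappings free of per-source cleanup logic.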