SPECTRAFORCE

Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer focused on Palantir Foundry and PySpark, offering an 8-month fully remote contract based out of San Francisco, CA. Key skills include Python, SQL, and data validation frameworks; experience with compliance reporting is required.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date
April 11, 2026
🕒 - Duration
More than 6 months
🏝️ - Location
Remote
📄 - Contract
1099 Contractor
🔒 - Security
Unknown
📍 - Location detailed
San Francisco Bay Area
🧠 - Skills detailed
#PySpark #Data Quality #Data Engineering #Palantir Foundry #Spark SQL #Compliance #Spark (Apache Spark) #Data Lineage #Data Accuracy #Datasets #ETL (Extract, Transform, Load) #Scala #Data Transformations #Python #SQL (Structured Query Language) #Data Pipeline
Role description
Job Title: Data Engineer (Palantir Foundry AND PySpark)
Location: San Francisco, CA (100% Remote)
Duration: 8-Month Contract

About the Role
We are seeking a skilled Data Engineer to support compliance reporting initiatives by building scalable, auditable, and high-quality data pipelines. This role focuses on developing Palantir Foundry-native solutions, implementing robust data validation frameworks, and ensuring regulatory-grade data accuracy and traceability. You will work closely with compliance stakeholders and data teams to deliver reliable datasets that meet audit and reporting requirements.

Key Responsibilities
• Design, build, and maintain Palantir Foundry pipelines for compliance reporting
• Develop data validation and data quality frameworks within Foundry
• Implement large-scale data transformations using PySpark and Spark SQL
• Build compliance-driven checks (data completeness, consistency, reconciliation, exception handling)
• Ensure data lineage, auditability, and reproducibility across all pipelines
• Collaborate with compliance SMEs to validate reporting logic and outputs
• Monitor pipeline performance, troubleshoot issues, and support data quality remediation

Required Qualifications
• Strong experience with Python
• Hands-on expertise in PySpark / Spark SQL
• Advanced proficiency in SQL
• Proven experience working with Palantir Foundry
• Experience building data validation / data quality frameworks
• Understanding of compliance, audit, or regulatory reporting requirements
• Strong problem-solving and analytical skills

Preferred Qualifications
• Experience with OEIS or regulatory compliance reporting
• Exposure to Foundry Workshop
• Experience with ontology-driven or curated datasets
• Background in utilities, infrastructure, or asset-based industries

Why Join Us?
• Work on high-impact compliance and regulatory data systems
• Fully remote flexibility
• Opportunity to work with modern data platforms like Palantir Foundry
• Collaborative, cross-functional environment
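To give candidates a feel for the work, here is a minimal sketch of the kind of completeness and reconciliation checks the role describes. It is plain Python for illustration only; in a Foundry pipeline these checks would be expressed as PySpark transforms, and all function, field, and record names below are hypothetical, not taken from any actual Foundry codebase.

```python
# Hypothetical data-quality checks of the kind used in compliance
# reporting pipelines: completeness (no missing required fields)
# and reconciliation (source totals match reported totals).

def completeness_check(rows, required_fields):
    """Return (row index, missing fields) for rows failing completeness."""
    failures = []
    for i, row in enumerate(rows):
        missing = [f for f in required_fields if row.get(f) in (None, "")]
        if missing:
            failures.append((i, missing))
    return failures

def reconciliation_check(source_total, reported_total, tolerance=0.0):
    """True if a reported total matches the source-system total within tolerance."""
    return abs(source_total - reported_total) <= tolerance

# Example usage with made-up records:
records = [
    {"account_id": "A-1", "amount": 120.0},
    {"account_id": None,  "amount": 45.5},   # fails completeness
]
exceptions = completeness_check(records, ["account_id", "amount"])
reconciled = reconciliation_check(
    sum(r["amount"] for r in records), 165.5
)
```

In a real pipeline, the exception rows would be routed to a remediation dataset rather than dropped, preserving the audit trail the posting emphasizes.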