Enterprise Data Architect

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for an Enterprise Data Architect in Reston, VA, on a long-term contract. Requirements include 5+ years in software/data engineering, proficiency in Python, PySpark, and SQL, and experience with Palantir Foundry. A Bachelor's degree is required.
🌎 - Country
United States
πŸ’± - Currency
$ USD
-
πŸ’° - Day rate
800
-
πŸ—“οΈ - Date discovered
August 22, 2025
πŸ•’ - Project duration
Unknown
-
🏝️ - Location type
On-site
-
πŸ“„ - Contract type
Unknown
-
πŸ”’ - Security clearance
Unknown
-
πŸ“ - Location detailed
Reston, VA
-
🧠 - Skills detailed
#AI (Artificial Intelligence) #Spark (Apache Spark) #AWS (Amazon Web Services) #Agile #Data Pipeline #Model Deployment #SQL (Structured Query Language) #Security #GitLab #Data Engineering #ETL (Extract, Transform, Load) #Deployment #ML (Machine Learning) #Spark SQL #Data Integration #PySpark #Jira #Palantir Foundry #Data Architecture #Compliance #Data Lake #Data Governance #Cloud #Python
Role description
Position: Enterprise Data Architect
Location: Reston, VA - On-site
Duration: Long-term

Description:
• Lead the end-to-end implementation of Palantir Foundry applications, including ontology modeling, pipeline development, and operational dashboards.
• Collaborate with internal stakeholders and Palantir engineers to define MVPs, red-flag logic, and iterative delivery milestones.
• Build and optimize data pipelines using PySpark, Python, and Foundry Code Workbooks to ingest and transform structured and unstructured data.
• Integrate Foundry with enterprise systems such as the AWS Data Lake and external APIs (e.g., LexisNexis, TREPP).
• Develop and maintain secure data exchange mechanisms (e.g., SFTP push/pull, SSH endpoints) for high-volume document transfers.
• Support Foundry GenAI chatbot capabilities and real-time analytics for fraud investigations and compliance workflows.

Required Qualifications:
• 5+ years of experience in software engineering or data engineering roles, with at least 2 years working directly with Palantir Foundry.
• Proficiency in Python, PySpark, and SQL, and experience with Foundry Pipelines, Ontology, and Code Workbooks.
• Strong understanding of data integration patterns, security protocols (SSO, TLS, Zscaler), and field-level encryption.
• Experience with cloud platforms (AWS preferred) and enterprise data governance practices.
• Excellent communication skills and the ability to work cross-functionally with technical and non-technical teams.
• Bachelor's degree.

Preferred Qualifications:
• Experience with Palantir AIP and Apollo for AI/ML model deployment.
• Familiarity with fraud detection, risk modeling, or financial services data domains.
• Prior experience in Agile environments with tools such as Jira, Confluence, and GitLab.
• Foundry certification (Data Engineer or Application Developer track).