ZonForce Technology

Pharma Data Architect

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Pharma Data Architect, fully remote, lasting 12+ months on W2, with an undisclosed pay rate. Key skills include Neo4j, Python, PySpark, and experience with Market Access or Patient Services data.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date
February 27, 2026
🕒 - Duration
More than 6 months
🏝️ - Location
Remote
📄 - Contract
W2 Contractor
🔒 - Security
Unknown
📍 - Location detailed
United States
🧠 - Skills detailed
#S3 (Amazon Simple Storage Service) #Python #AWS (Amazon Web Services) #ETL (Extract, Transform, Load) #Data Architecture #Impala #AI (Artificial Intelligence) #Logging #DynamoDB #Compliance #Monitoring #Data Lake #Data Quality #Lambda (AWS Lambda) #Indexing #Data Layers #Knowledge Graph #NoSQL #Big Data #Neo4J #Datasets #Storage #Metadata #Data Modeling #Agile #Data Governance #Data Engineering #PySpark #Data Privacy #Spark (Apache Spark) #Strategy #Cloud #Data Integration #Data Strategy #API (Application Programming Interface)
Role description
This role is open only for W2 candidates - NO C2C.

Job Title: Data Architect - Pharma / Healthcare
Location: Fully Remote
Duration: 12+ months on W2

Job Description
Looking for a Data Architect or Sr. Data Engineer with experience in Patient Access or Market Access data strategy.

Knowledge Graph Architecture & Development
• Architect, design, and build Neo4j-based knowledge graph structures supporting Market Access and Patient Services use cases.
• Ingest, model, and connect complex pharma datasets, including patients, coverage, contracts, benefits, services, and gross-to-net (GTN) components.
• Design and optimize graph schemas, nodes, relationships, metadata layers, indexing strategies, and query performance.
• Ensure graph data is accurate, traceable, and aligned with enterprise data governance and compliance standards.

Data Engineering & Integration
• Coordinate and implement Specialty Pharmacy and Market Access data integration solutions in partnership with Commercial Data Lake teams.
• Develop ETL/ELT pipelines using PySpark and Python to ingest, transform, aggregate, and orchestrate data for end-user consumption.
• Build competency across Market Access and Patient Services datasets, including:
  • Rebate data
  • EDI sales and chargeback data
  • Master data
  • Copay and affordability data
  • Medical and prescription claims
  • Care model and patient services data
• Apply best practices for data quality monitoring, validation, and reporting.
• Leverage big data tools and architectures (e.g., Spark, Hive, Impala, cloud data platforms) to answer critical business questions.

AI / LLM Integration
• Design and integrate LLM-powered chatbot and assistant workflows on top of the knowledge graph.
• Implement prompt engineering, retrieval-augmented generation (RAG), and domain-specific grounding using graph and document sources.
• Ensure AI components follow enterprise standards for explainability, auditability, and compliance.
• Collaborate with enterprise AI teams to align with approved frameworks, guardrails, and tooling.

Backend Services, Metadata & Logging
• Build backend services that interface with the knowledge graph, LLM systems, and field-facing applications.
• Implement robust metadata, logging, and monitoring frameworks to support auditability and regulated environments.
• Utilize cloud-native services (e.g., object storage, NoSQL stores, serverless compute, APIs) where appropriate.

Agile Delivery & Stakeholder Collaboration
• Deliver at high velocity in an agile, iterative environment with visible daily progress.
• Participate in standups, technical design reviews, and sprint planning.
• Own work end-to-end: design, implementation, testing, and validation.
• Proactively identify data gaps, design risks, and integration issues before they become blockers.
• Establish and maintain strong working relationships across technical teams and external partners, managing expectations and communication effectively.

Required Skills & Experience
• Expert-level experience with:
  • Data architecture and data modeling
  • Graph modeling and Neo4j
  • Python and PySpark
  • AI/LLMs and chatbot architectures
  • Cloud platforms (AWS preferred)
• Strong hands-on experience integrating complex enterprise datasets.
• Proven ability to work independently with minimal direction.
• Experience delivering MVPs or prototypes under tight timelines.
• Experience working in highly regulated environments (data privacy, audit, compliance).

Preferred / Nice-to-Have Qualifications
• Prior experience with Market Access, Patient Services, or Specialty Pharmacy data
• Experience contributing to enterprise LLM pilots or production AI solutions
• Familiarity with AWS services such as DynamoDB, Lambda, S3, API Gateway
• Experience designing AI-driven insights layers on top of enterprise data platforms
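To give candidates a flavor of the knowledge-graph work described above, here is a minimal Python sketch that generates idempotent Cypher for linking a patient to a payer plan. The node labels, property names, and IDs are illustrative assumptions, not this employer's actual schema:

```python
# Hypothetical sketch: building Cypher MERGE statements for a
# Market Access knowledge graph. Labels (Patient, Plan), the
# COVERED_BY relationship, and all property names are assumptions.

def patient_coverage_cypher(record: dict) -> list[str]:
    """Return idempotent Cypher statements linking a patient to a plan."""
    pid = record["patient_id"]
    plan = record["plan_id"]
    return [
        # MERGE (rather than CREATE) keeps re-runs of the pipeline
        # from producing duplicate nodes or relationships.
        f"MERGE (p:Patient {{patient_id: '{pid}'}})",
        f"MERGE (pl:Plan {{plan_id: '{plan}'}})",
        (
            f"MATCH (p:Patient {{patient_id: '{pid}'}}), "
            f"(pl:Plan {{plan_id: '{plan}'}}) "
            f"MERGE (p)-[:COVERED_BY {{effective: '{record['effective_date']}'}}]->(pl)"
        ),
    ]

stmts = patient_coverage_cypher(
    {"patient_id": "PT-001", "plan_id": "PLN-42", "effective_date": "2026-01-01"}
)
for s in stmts:
    print(s)
```

In practice the statements would be executed through the official Neo4j Python driver with parameterized queries instead of string interpolation; the string-building form here just makes the graph shape easy to see.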