Publication

Explainable Biomedical Claim Verification with Large Language Models

Siting Liang; Daniel Sonntag
In: Joint Proceedings of the ACM IUI Workshops 2025. International Conference on Intelligent User Interfaces (IUI-2025), ACM IUI Workshops 2025, located at IUI-2025, March 24-27, Cagliari, Italy, Joint Proceedings of the ACM IUI Workshops 2025, 3/2025.

Abstract

Verification of biomedical claims is critical for healthcare decision-making, public health policy, and scientific research. We present an interactive biomedical claim verification system that integrates LLMs, transparent model explanations, and user-guided justification. In the system, users first retrieve relevant scientific studies from a persistent medical literature corpus and explore how different LLMs perform natural language inference (NLI) within a task-adaptive reasoning framework to classify each study as "Support," "Contradict," or "Not Enough Information" with respect to the claim. Users can examine the model's reasoning process, with additional insight provided by SHAP values that highlight word-level contributions to the final result. This combination enables a more transparent and interpretable evaluation of the model's decision-making process. A summary stage allows users to consolidate the results by selecting a verdict together with a narrative justification generated by the LLMs. As a result, a consensus-based final decision is summarized for each retrieved study, aiming at safe and accountable AI-assisted decision-making in biomedical contexts. We plan to integrate this explainable verification system as a component of a broader evidence synthesis framework to support human-AI collaboration.
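The three-way NLI step described above can be illustrated with a minimal sketch. This is not the paper's implementation: the prompt template, the `build_nli_prompt` and `parse_verdict` helpers, and the keyword-based parsing of the model's free-text answer are all assumptions made for demonstration; a real deployment would send the prompt to an actual LLM.

```python
# Illustrative sketch (assumed, not the authors' code): build a task-adaptive
# NLI prompt for a claim/study pair and map the model's free-text answer onto
# the system's three labels.

LABELS = ("Support", "Contradict", "Not Enough Information")

def build_nli_prompt(claim: str, study_text: str) -> str:
    """Assemble a prompt asking an LLM to classify the study w.r.t. the claim.
    The exact wording is a placeholder, not the paper's template."""
    return (
        f"Claim: {claim}\n"
        f"Study: {study_text}\n"
        "Does the study Support or Contradict the claim, or is there "
        "Not Enough Information? Answer with exactly one of these labels."
    )

def parse_verdict(response: str) -> str:
    """Map a raw model response onto one of the three NLI labels.
    Checks 'contradict' first so mixed answers are not misread as support."""
    text = response.lower()
    if "contradict" in text:
        return "Contradict"
    if "support" in text:
        return "Support"
    return "Not Enough Information"
```

In the interactive system, each retrieved study would be run through this step per model, and the resulting label, together with the model's rationale and SHAP word-level attributions, is what the user inspects before selecting a verdict in the summary stage.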
