
Brown Bag Lunch Series: LLM Reasoning in Healthcare: Faithful Explanations or Plausible Rationalizations?

Achieving trustworthy AI in healthcare remains a significant challenge: existing eXplainable Artificial Intelligence (XAI) methods such as attention maps or LIME/SHAP are often dismissed by clinicians and patients as uninterpretable. While reasoning LLMs initially seemed promising for generating natural-language explanations, they often produce unfaithful rationalizations that obscure a model’s true logic. To address this, we propose shifting from a model-centric to a user-centric paradigm. We are developing a multimodal XAI system that is intrinsically constrained by medical knowledge. This design changes the role of the LLM from an unreliable generator of plausible explanations to a “faithful translator” of the computational process, guaranteeing trustworthiness by ensuring that the explanation and the computation are identical. The system is designed to be iteratively fine-tuned by end users, resulting in a truly user-centric and reliable XAI solution.
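
The “faithful translator” idea can be illustrated with a minimal sketch, under assumptions of our own: the knowledge-constrained model emits a structured decision trace, and the LLM is prompted only to restate that trace in plain language, so the explanation cannot diverge from the computation. The names below (`DecisionTrace`, `verbalize_trace`, `llm_complete`) and the prompt format are illustrative, not the actual system described in the talk.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: the predictive model produces a structured decision trace,
# and the LLM's only job is to verbalize that trace -- it adds no reasoning of its
# own, so the natural-language explanation mirrors the underlying computation.

@dataclass
class DecisionTrace:
    """Structured record of the computation behind one prediction (illustrative)."""
    prediction: str                                             # e.g. "high risk of sepsis"
    evidence: dict[str, float] = field(default_factory=dict)    # feature -> contribution
    rules_fired: list[str] = field(default_factory=list)        # medical-knowledge constraints applied


def verbalize_trace(trace: DecisionTrace, llm_complete) -> str:
    """Ask the LLM to restate the trace for a clinician without adding content."""
    prompt = (
        "Restate the following decision record in plain language for a clinician. "
        "Do not add, remove, or infer anything beyond the record.\n"
        f"Prediction: {trace.prediction}\n"
        f"Evidence (feature contributions): {trace.evidence}\n"
        f"Knowledge rules applied: {trace.rules_fired}\n"
    )
    return llm_complete(prompt)


if __name__ == "__main__":
    # Example usage with a stand-in LLM call; a real system would plug in an actual model.
    trace = DecisionTrace(
        prediction="high risk of sepsis",
        evidence={"lactate": 0.42, "heart_rate": 0.31, "temperature": 0.12},
        rules_fired=["Sepsis-3 criterion: serum lactate > 2 mmol/L"],
    )
    fake_llm = lambda prompt: "Stub LLM output for prompt:\n" + prompt
    print(verbalize_trace(trace, fake_llm))
```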
