Brown Bag Lunch Series: LLM Reasoning in Healthcare: Faithful Explanations or Plausible Rationalizations?
Achieving trustworthy AI in healthcare remains a significant challenge, as existing eXplainable Artificial Intelligence (XAI) methods such as attention maps or LIME/SHAP are often dismissed by clinicians and patients as uninterpretable. While reasoning LLMs initially seemed promising for generating natural-language explanations, they often produce unfaithful rationalizations that obscure a model’s true logic. To solve this,…
