Gallée Luisa, Lisson Catharina Silvia, Ropinski Timo, Beer Meinrad, Götz Michael
Experimental Radiology, Ulm University Medical Center, Ulm, Germany.
XAIRAD - Cooperation for Artificial Intelligence in Experimental Radiology, Ulm, Germany.
PeerJ Comput Sci. 2025 May 29;11:e2908. doi: 10.7717/peerj-cs.2908. eCollection 2025.
Explainable artificial intelligence (xAI) is becoming increasingly important as the need to understand a model's reasoning grows when models are applied in high-risk areas. This is especially crucial in medicine, where decision support systems are utilised to make diagnoses or to determine appropriate therapies. Here it is essential to provide intuitive and comprehensive explanations so that the system's correctness can be evaluated. To meet this need, we have developed Proto-Caps, an intrinsically explainable model for image classification. It explains its decisions by providing visual prototypes that resemble specific appearance features. These characteristics are predefined by humans, which on the one hand makes them understandable and on the other hand leads the model to base its decision on the same features as a human expert. On two public datasets, this method outperforms existing explainable approaches, despite the additional explainability modality provided by the visual prototypes. Beyond the performance evaluations, we analysed truthfulness by examining the joint information between the target prediction and its explanation output, in order to ensure that the explanation actually underpins the target classification. Through extensive hyperparameter studies, we also identified optimal model settings, providing a starting point for further research. Our work emphasises the prospects of combining xAI approaches for greater explainability and demonstrates that incorporating explainability does not necessarily lead to a loss of performance.
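To make the prototype idea concrete, the following is a minimal sketch of prototype-based attribute scoring in PyTorch, loosely inspired by the description above. All names, dimensions, and the `PrototypeHead` layout are hypothetical illustrations; the actual Proto-Caps architecture (capsule layers, routing, training losses) is not reproduced here.

```python
# Hypothetical sketch: scoring an image embedding against learnable prototype
# vectors, one prototype bank per human-defined appearance attribute.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PrototypeHead(nn.Module):
    """Scores an embedding against prototype vectors grouped by attribute."""

    def __init__(self, embed_dim: int, num_attributes: int, protos_per_attr: int):
        super().__init__()
        # One learnable prototype bank per attribute (assumed layout).
        self.prototypes = nn.Parameter(
            torch.randn(num_attributes, protos_per_attr, embed_dim)
        )

    def forward(self, z: torch.Tensor):
        # z: (batch, embed_dim) embedding from any backbone.
        z = F.normalize(z, dim=-1)
        p = F.normalize(self.prototypes, dim=-1)
        # Cosine similarity of each sample to every prototype:
        # result shape (batch, num_attributes, protos_per_attr).
        sim = torch.einsum("bd,apd->bap", z, p)
        # Attribute score = similarity to the closest prototype; the index of
        # that prototype is what a user could be shown as a visual explanation.
        scores, nearest = sim.max(dim=-1)
        return scores, nearest

head = PrototypeHead(embed_dim=128, num_attributes=8, protos_per_attr=4)
scores, nearest = head(torch.randn(2, 128))
print(scores.shape, nearest.shape)  # torch.Size([2, 8]) torch.Size([2, 8])
```

In such a design, the attribute scores would feed the downstream classifier, while the nearest-prototype indices point back to training examples that serve as the visual explanation, so the same quantities drive both the prediction and its justification.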