Qi Ruoxi, Zheng Yueyuan, Yang Yi, Cao Caleb Chen, Hsiao Janet H
Department of Psychology, University of Hong Kong, Hong Kong SAR, China.
Huawei Research Hong Kong, Hong Kong SAR, China.
Br J Psychol. 2024 Jun 10. doi: 10.1111/bjop.12714.
Explainable AI (XAI) methods provide explanations of AI models, but our understanding of how they compare with human explanations remains limited. Here, we used eye-tracking to examine human participants' attention strategies when classifying images and when explaining how they classified the images, and compared these strategies with saliency-based explanations from current XAI methods. We found that humans adopted more explorative attention strategies for the explanation task than for the classification task itself. Clustering identified two representative explanation strategies: one involved focused visual scanning of foreground objects accompanied by more conceptual explanations, which contained more specific information for inferring class labels; the other involved explorative scanning accompanied by more visual explanations, which were rated as more effective for early category learning. Interestingly, XAI saliency map explanations were most similar to the explorative attention strategy in humans, and explanations that highlight discriminative features identified through perturbation (invoking observable causality) were more similar to human strategies than those highlighting internal features associated with higher class scores. Thus, humans use both visual and conceptual information during explanation, each serving different purposes, and XAI methods that highlight features informing observable causality match human explanations better and are potentially more accessible to users.