Explanation strategies in humans versus current explainable artificial intelligence: Insights from image classification.

Author information

Qi Ruoxi, Zheng Yueyuan, Yang Yi, Cao Caleb Chen, Hsiao Janet H

Affiliations

Department of Psychology, University of Hong Kong, Hong Kong SAR, China.

Huawei Research Hong Kong, Hong Kong SAR, China.

Publication information

Br J Psychol. 2024 Jun 10. doi: 10.1111/bjop.12714.

DOI: 10.1111/bjop.12714
PMID: 38858823
Abstract

Explainable AI (XAI) methods provide explanations of AI models, but our understanding of how they compare with human explanations remains limited. Here, we examined human participants' attention strategies when classifying images and when explaining how they classified the images through eye-tracking and compared their attention strategies with saliency-based explanations from current XAI methods. We found that humans adopted more explorative attention strategies for the explanation task than the classification task itself. Two representative explanation strategies were identified through clustering: One involved focused visual scanning on foreground objects with more conceptual explanations, which contained more specific information for inferring class labels, whereas the other involved explorative scanning with more visual explanations, which were rated higher in effectiveness for early category learning. Interestingly, XAI saliency map explanations had the highest similarity to the explorative attention strategy in humans, and explanations highlighting discriminative features from invoking observable causality through perturbation had higher similarity to human strategies than those highlighting internal features associated with higher class score. Thus, humans use both visual and conceptual information during explanation, which serve different purposes, and XAI methods that highlight features informing observable causality match better with human explanations, potentially more accessible to users.


Similar articles

1. Explanation strategies in humans versus current explainable artificial intelligence: Insights from image classification. Br J Psychol. 2024 Jun 10. doi: 10.1111/bjop.12714.
2. Human attention guided explainable artificial intelligence for computer vision models. Neural Netw. 2024 Sep;177:106392. doi: 10.1016/j.neunet.2024.106392. Epub 2024 May 15.
3. Essential properties and explanation effectiveness of explainable artificial intelligence in healthcare: A systematic review. Heliyon. 2023 May 8;9(5):e16110. doi: 10.1016/j.heliyon.2023.e16110. eCollection 2023 May.
4. ExAID: A multimodal explanation framework for computer-aided diagnosis of skin lesions. Comput Methods Programs Biomed. 2022 Mar;215:106620. doi: 10.1016/j.cmpb.2022.106620. Epub 2022 Jan 5.
5. Clinical domain knowledge-derived template improves post hoc AI explanations in pneumothorax classification. J Biomed Inform. 2024 Aug;156:104673. doi: 10.1016/j.jbi.2024.104673. Epub 2024 Jun 9.
6. Framework for Classifying Explainable Artificial Intelligence (XAI) Algorithms in Clinical Medicine. Online J Public Health Inform. 2023 Sep 1;15:e50934. doi: 10.2196/50934. eCollection 2023.
7. Explainable AI in medical imaging: An overview for clinical practitioners - Beyond saliency-based XAI approaches. Eur J Radiol. 2023 May;162:110786. doi: 10.1016/j.ejrad.2023.110786. Epub 2023 Mar 20.
8. Explaining the black-box smoothly - A counterfactual approach. Med Image Anal. 2023 Feb;84:102721. doi: 10.1016/j.media.2022.102721. Epub 2022 Dec 13.
9. CX-ToM: Counterfactual explanations with theory-of-mind for enhancing human trust in image recognition models. iScience. 2021 Dec 11;25(1):103581. doi: 10.1016/j.isci.2021.103581. eCollection 2022 Jan 21.
10. Explaining Aha! moments in artificial agents through IKE-XAI: Implicit Knowledge Extraction for eXplainable AI. Neural Netw. 2022 Nov;155:95-118. doi: 10.1016/j.neunet.2022.08.002. Epub 2022 Aug 6.

Cited by

1. Understanding the role of eye movement pattern and consistency during face recognition through EEG decoding. NPJ Sci Learn. 2025 May 12;10(1):28. doi: 10.1038/s41539-025-00316-3.