

Explainability increases trust resilience in intelligent agents.

Authors

Xu Min, Wang Yiwen

Affiliations

School of Economics and Management, Fuzhou University, Fuzhou, China.

School of Business Administration, Zhejiang Gongshang University, Hangzhou, China.

Publication

Br J Psychol. 2024 Oct 21. doi: 10.1111/bjop.12740.

DOI: 10.1111/bjop.12740
PMID: 39431949
Abstract

Even though artificial intelligence (AI)-based systems typically outperform human decision-makers, they are not immune to errors, leading users to lose trust in them and become less likely to use them again, a phenomenon known as algorithm aversion. The purpose of the present research was to investigate whether explainable AI (XAI) could function as a viable strategy to counter algorithm aversion. We conducted two experiments to examine how XAI influences users' willingness to continue using AI-based systems when these systems exhibit errors. The results showed that, following the observation of algorithms erring, users' inclination to delegate decisions to or follow advice from intelligent agents significantly decreased compared to the period before the errors were revealed. However, explainability effectively mitigated this decline: users in the XAI condition were more likely than those in the non-XAI condition to continue utilizing intelligent agents for subsequent tasks after seeing algorithms err. We further found that explainability could reduce users' decision regret, and that this decrease in decision regret mediated the relationship between explainability and re-use behaviour. These findings underscore the adaptive function of XAI in alleviating negative user experiences and maintaining user trust in the context of imperfect AI.


Similar Articles

1. Explainability increases trust resilience in intelligent agents.
   Br J Psychol. 2024 Oct 21. doi: 10.1111/bjop.12740.
2. A Vision on User-Centered Implementation and Evaluation of Explainable AI for Predicting Hospital-Onset Bacteremia.
   Stud Health Technol Inform. 2024 Aug 22;316:766-770. doi: 10.3233/SHTI240525.
3. Evaluating Explainable Artificial Intelligence (XAI) techniques in chest radiology imaging through a human-centered Lens.
   PLoS One. 2024 Oct 9;19(10):e0308758. doi: 10.1371/journal.pone.0308758. eCollection 2024.
4. Effects of explainable artificial intelligence in neurology decision support.
   Ann Clin Transl Neurol. 2024 May;11(5):1224-1235. doi: 10.1002/acn3.52036. Epub 2024 Apr 5.
5. Explainability and Transparency of Classifiers for Air-Handling Unit Faults Using Explainable Artificial Intelligence (XAI).
   Sensors (Basel). 2022 Aug 23;22(17):6338. doi: 10.3390/s22176338.
6. Explainable AI in medical imaging: An overview for clinical practitioners - Saliency-based XAI approaches.
   Eur J Radiol. 2023 May;162:110787. doi: 10.1016/j.ejrad.2023.110787. Epub 2023 Mar 21.
7. Designing explainable AI to improve human-AI team performance: A medical stakeholder-driven scoping review.
   Artif Intell Med. 2024 Mar;149:102780. doi: 10.1016/j.artmed.2024.102780. Epub 2024 Jan 20.
8. A Survey on Medical Explainable AI (XAI): Recent Progress, Explainability Approach, Human Interaction and Scoring System.
   Sensors (Basel). 2022 Oct 21;22(20):8068. doi: 10.3390/s22208068.
9. Explanatory pragmatism: a context-sensitive framework for explainable medical AI.
   Ethics Inf Technol. 2022;24(1):13. doi: 10.1007/s10676-022-09632-3. Epub 2022 Feb 28.
10. Dermatologist-like explainable AI enhances trust and confidence in diagnosing melanoma.
    Nat Commun. 2024 Jan 15;15(1):524. doi: 10.1038/s41467-023-43095-4.

Cited By

1. Mediating effect of AI attitudes and AI literacy on the relationship between career self-efficacy and job-seeking anxiety.
   BMC Psychol. 2025 Apr 30;13(1):454. doi: 10.1186/s40359-025-02757-2.