Suppr 超能文献


Explainable recommendation: when design meets trust calibration.

Authors

Naiseh Mohammad, Al-Thani Dena, Jiang Nan, Ali Raian

Affiliations

Faculty of Science and Technology, Bournemouth University, Fern Barrow, Poole, BH12 5BB UK.

College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar.

Publication

World Wide Web. 2021;24(5):1857-1884. doi: 10.1007/s11280-021-00916-0. Epub 2021 Aug 2.

DOI: 10.1007/s11280-021-00916-0
PMID: 34366701
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC8327305/
Abstract

Human-AI collaborative decision-making tools are being increasingly applied in critical domains such as healthcare. However, these tools are often seen as closed and intransparent for human decision-makers. An essential requirement for their success is the ability to provide explanations about themselves that are understandable and meaningful to the users. While explanations generally have positive connotations, studies showed that the assumption behind users interacting and engaging with these explanations could introduce trust calibration errors such as facilitating irrational or less thoughtful agreement or disagreement with the AI recommendation. In this paper, we explore how to help trust calibration through explanation interaction design. Our research method included two main phases. We first conducted a think-aloud study with 16 participants aiming to reveal main trust calibration errors concerning explainability in AI-Human collaborative decision-making tools. Then, we conducted two co-design sessions with eight participants to identify design principles and techniques for explanations that help trust calibration. As a conclusion of our research, we provide five design principles: Design for engagement, challenging habitual actions, attention guidance, friction and support training and learning. Our findings are meant to pave the way towards a more integrated framework for designing explanations with trust calibration as a primary goal.


Similar Articles

1
Explainable recommendation: when design meets trust calibration.
World Wide Web. 2021;24(5):1857-1884. doi: 10.1007/s11280-021-00916-0. Epub 2021 Aug 2.
2
Integrating Explainable Machine Learning in Clinical Decision Support Systems: Study Involving a Modified Design Thinking Approach.
JMIR Form Res. 2024 Apr 16;8:e50475. doi: 10.2196/50475.
3
First impressions of a financial AI assistant: differences between high trust and low trust users.
Front Artif Intell. 2023 Oct 3;6:1241290. doi: 10.3389/frai.2023.1241290. eCollection 2023.
4
An Explainable Artificial Intelligence Software Tool for Weight Management Experts (PRIMO): Mixed Methods Study.
J Med Internet Res. 2023 Sep 6;25:e42047. doi: 10.2196/42047.
5
Putting explainable AI in context: institutional explanations for medical AI.
Ethics Inf Technol. 2022;24(2):23. doi: 10.1007/s10676-022-09649-8. Epub 2022 May 6.
6
Examining explainable clinical decision support systems with think aloud protocols.
PLoS One. 2023 Sep 14;18(9):e0291443. doi: 10.1371/journal.pone.0291443. eCollection 2023.
7
Population Preferences for Performance and Explainability of Artificial Intelligence in Health Care: Choice-Based Conjoint Survey.
J Med Internet Res. 2021 Dec 13;23(12):e26611. doi: 10.2196/26611.
8
Medical Informatics in a Tension Between Black-Box AI and Trust.
Stud Health Technol Inform. 2022 Jan 14;289:41-44. doi: 10.3233/SHTI210854.
9
Medically-oriented design for explainable AI for stress prediction from physiological measurements.
BMC Med Inform Decis Mak. 2022 Feb 11;22(1):38. doi: 10.1186/s12911-022-01772-2.
10
Designing for Confidence: The Impact of Visualizing Artificial Intelligence Decisions.
Front Neurosci. 2022 Jun 24;16:883385. doi: 10.3389/fnins.2022.883385. eCollection 2022.

Cited By

1
How Explainable Artificial Intelligence Can Increase or Decrease Clinicians' Trust in AI Applications in Health Care: Systematic Review.
JMIR AI. 2024 Oct 30;3:e53207. doi: 10.2196/53207.
2
False conflict and false confirmation errors are crucial components of AI accuracy in medical decision making.
Nat Commun. 2024 Aug 13;15(1):6896. doi: 10.1038/s41467-024-50952-3.
3
Effects of reliability indicators on usage, acceptance and preference of predictive process management decision support systems.
Qual User Exp. 2022;7(1):6. doi: 10.1007/s41233-022-00053-0. Epub 2022 Sep 5.

References

1
Rapid Trust Calibration through Interpretable and Uncertainty-Aware AI.
Patterns (N Y). 2020 Jul 10;1(4):100049. doi: 10.1016/j.patter.2020.100049.
2
Physicians' responses to clinical decision support on an intensive care unit--comparison of four different alerting methods.
Artif Intell Med. 2013 Sep;59(1):33-8. doi: 10.1016/j.artmed.2013.05.002. Epub 2013 Jun 6.
3
Developing and implementing clinical decision support for use in a computerized prescriber-order-entry system.
Am J Health Syst Pharm. 2010 Mar 1;67(5):391-400. doi: 10.2146/ajhp090153.
4
Tiering drug-drug interaction alerts by severity increases compliance rates.
J Am Med Inform Assoc. 2009 Jan-Feb;16(1):40-6. doi: 10.1197/jamia.M2808. Epub 2008 Oct 24.
5
Designing for flexible interaction between humans and automation: delegation interfaces for supervisory control.
Hum Factors. 2007 Feb;49(1):57-75. doi: 10.1518/001872007779598037.
6
Changing circumstances, disrupting habits.
J Pers Soc Psychol. 2005 Jun;88(6):918-933. doi: 10.1037/0022-3514.88.6.918.
7
Trust in automation: designing for appropriate reliance.
Hum Factors. 2004 Spring;46(1):50-80. doi: 10.1518/hfes.46.1.50_30392.
8
Order and disorder in everyday action: the roles of contention scheduling and supervisory attention.
Neurocase. 2002;8(1-2):61-79. doi: 10.1093/neucas/8.1.61.
9
On the costs of accessible attitudes: detecting that the attitude object has changed.
J Pers Soc Psychol. 2000 Feb;78(2):197-210. doi: 10.1037//0022-3514.78.2.197.
10
Clinical reasoning about new symptoms despite preexisting disease: sources of error and order effects.
Fam Med. 1995 May;27(5):314-20.