The Impact of Explanations on Layperson Trust in Artificial Intelligence-Driven Symptom Checker Apps: Experimental Study.

Affiliation

Oxford Internet Institute, University of Oxford, Oxford, United Kingdom.

Publication Information

J Med Internet Res. 2021 Nov 3;23(11):e29386. doi: 10.2196/29386.

Abstract

BACKGROUND

Artificial intelligence (AI)-driven symptom checkers are available to millions of users globally and are advocated as a tool to deliver health care more efficiently. To achieve the promoted benefits of a symptom checker, laypeople must trust and subsequently follow its instructions. In AI, explanations are seen as a tool to communicate the rationale behind black-box decisions to encourage trust and adoption. However, the effectiveness of the types of explanations used in AI-driven symptom checkers has not yet been studied. Explanations can follow many forms, including why-explanations and how-explanations. Social theories suggest that why-explanations are better at communicating knowledge and cultivating trust among laypeople.

OBJECTIVE

The aim of this study is to ascertain whether explanations provided by a symptom checker affect explanatory trust among laypeople and whether this trust is impacted by their existing knowledge of disease.

METHODS

A cross-sectional survey of 750 healthy participants was conducted. The participants were shown a video of a chatbot simulation that resulted in the diagnosis of either a migraine or temporal arteritis, chosen for their differing levels of epidemiological prevalence. These diagnoses were accompanied by one of four types of explanations. Each explanation type was selected either because of its current use in symptom checkers or because it was informed by theories of contrastive explanation. Exploratory factor analysis of participants' responses followed by comparison-of-means tests were used to evaluate group differences in trust.

RESULTS

Depending on the treatment group, two or three variables were generated, reflecting the prior knowledge and the subsequent mental model that the participants held. When varying explanation type by disease, the effect was nonsignificant for migraine (P=.65) and marginally significant for temporal arteritis (P=.09). Varying disease by explanation type yielded statistically significant differences for input influence (P=.001), social proof (P=.049), and no explanation (P=.006), with counterfactual explanation approaching significance (P=.053). The results suggest that trust in explanations is significantly affected by the disease being explained. When laypeople have existing knowledge of a disease, explanations have little impact on trust. Where the need for information is greater, different explanation types engender significantly different levels of trust. These results indicate that to be successful, symptom checkers need to tailor explanations to each user's specific question and discount the diseases that they may also be aware of.

CONCLUSIONS

System builders developing explanations for symptom-checking apps should consider the recipient's knowledge of a disease and tailor explanations to each user's specific need. Effort should be placed on generating explanations that are personalized to each user of a symptom checker to fully discount the diseases that they may be aware of and to close their information gap.

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3157/8600426/060f5275a3f7/jmir_v23i11e29386_fig1.jpg
