Bohye Kim, Katie Ryan, Jane Paik Kim
Department of Psychiatry and Behavioral Sciences, Stanford University School of Medicine, Stanford, California, USA.
J Med Ethics. 2024 Dec 12. doi: 10.1136/jme-2024-110080.
It is increasingly recognised that the success of artificial intelligence-based clinical decision support (AI/CDS) tools will depend on physician and patient trust, but factors impacting patients' views on clinical care reliant on AI have been less explored.
This pilot study explores whether, and in what contexts, detail of explanation provided about AI/CDS tools impacts patients' attitudes toward the tools and their clinical care.
We designed a web-based vignette survey using a Sequential Multiple Assignment Randomized Trial (SMART) design. Participants recruited through Amazon Mechanical Turk were presented with hypothetical vignettes describing health concerns and were sequentially randomised along three factors: (1) the level of detail of explanation regarding an AI/CDS tool; (2) the AI/CDS result; and (3) the physician's level of agreement with the AI/CDS result. We compared mean ratings of comfort and confidence by level of detail of explanation using t-tests. Regression models were fit to confirm conditional effects of detail of explanation.
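The analytic approach described above can be sketched in Python. This is an illustrative reconstruction only: the data are simulated, and all variable names, effect sizes, and coding choices are assumptions rather than the study's actual data or code.

```python
# Illustrative sketch of the abstract's analysis plan: t-tests comparing
# ratings by detail of explanation, then a regression with interactions
# to probe conditional effects. All data below are simulated.
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300
detail = rng.integers(0, 2, n)   # 0 = minimal explanation, 1 = detailed (assumed coding)
result = rng.integers(0, 2, n)   # AI/CDS result arm (assumed binary)
agree = rng.integers(0, 2, n)    # physician agrees with AI/CDS result (assumed binary)
# Simulated comfort rating with a main effect of detail and a
# detail-by-agreement interaction, echoing the reported pattern.
comfort = 3 + 0.5 * detail + 0.3 * detail * agree + rng.normal(0, 1, n)
df = pd.DataFrame({"detail": detail, "result": result,
                   "agree": agree, "comfort": comfort})

# Two-sample t-test: mean comfort by level of detail of explanation
t_stat, p_val = stats.ttest_ind(df.loc[df.detail == 1, "comfort"],
                                df.loc[df.detail == 0, "comfort"])

# Regression with interaction terms to confirm conditional effects of
# detail given the AI/CDS result and physician agreement
model = smf.ols("comfort ~ detail * result * agree", data=df).fit()
print(model.params)
```

A significant `detail:agree` interaction in such a model would correspond to the abstract's finding that the effect of explanation detail on perceptions differed by the physician's agreement with the AI/CDS result.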
The detail of explanation provided regarding the AI/CDS tools was positively related to respondents' comfort and confidence in the usage of the tools and their perception of the physician's final decision. The effects of detail of explanation on their perception of the physician's final decision were different given the AI/CDS result and the physician's agreement or disagreement with the result.
Providing patients with more information about the use of AI/CDS tools may improve their attitudes toward healthcare involving such tools, both overall and in specific contexts defined by the AI/CDS result and the physician's agreement with it.