

Choosing human over AI doctors? How comparative trust associations and knowledge relate to risk and benefit perceptions of AI in healthcare.

Affiliations

Department of Management, Technology, and Economics, ETH Zurich, Zurich, Switzerland.

Publication information

Risk Anal. 2024 Apr;44(4):939-957. doi: 10.1111/risa.14216. Epub 2023 Sep 18.

Abstract

The development of artificial intelligence (AI) in healthcare is accelerating rapidly. Beyond the urge for technological optimization, public perceptions and preferences regarding the application of such technologies remain poorly understood. Risk and benefit perceptions of novel technologies are key drivers for successful implementation. Therefore, it is crucial to understand the factors that condition these perceptions. In this study, we draw on the risk perception and human-AI interaction literature to examine how explicit (i.e., deliberate) and implicit (i.e., automatic) comparative trust associations with AI versus physicians, and knowledge about AI, relate to likelihood perceptions of risks and benefits of AI in healthcare and preferences for the integration of AI in healthcare. We use survey data (N = 378) to specify a path model. Results reveal that the path for implicit comparative trust associations on relative preferences for AI over physicians is only significant through risk, but not through benefit perceptions. This finding is reversed for AI knowledge. Explicit comparative trust associations relate to AI preference through risk and benefit perceptions. These findings indicate that risk perceptions of AI in healthcare might be driven more strongly by affect-laden factors than benefit perceptions, which in turn might depend more on reflective cognition. Implications of our findings and directions for future research are discussed considering the conceptualization of trust as heuristic and dual-process theories of judgment and decision-making. Regarding the design and implementation of AI-based healthcare technologies, our findings suggest that a holistic integration of public viewpoints is warranted.
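To make the mediation structure described above concrete (comparative trust associations and AI knowledge → risk and benefit perceptions → relative preference for AI over physicians), the sketch below specifies a path model of that shape with the Python library semopy. This is an illustrative sketch only, assuming lavaan-style model syntax; the variable names, the synthetic data, and the coefficients are hypothetical placeholders and do not reproduce the authors' data, code, or results.

```python
# Illustrative path-model sketch (not the authors' analysis), using semopy.
import numpy as np
import pandas as pd
from semopy import Model

# Synthetic placeholder data standing in for the survey responses (N = 378).
rng = np.random.default_rng(0)
n = 378
df = pd.DataFrame({
    "implicit_trust": rng.normal(size=n),  # implicit comparative trust association
    "explicit_trust": rng.normal(size=n),  # explicit comparative trust association
    "ai_knowledge": rng.normal(size=n),    # self-reported knowledge about AI
})
# Mediators and outcome simulated with arbitrary coefficients so the model runs.
df["risk_perception"] = (-0.4 * df["implicit_trust"] - 0.3 * df["explicit_trust"]
                         + rng.normal(scale=0.8, size=n))
df["benefit_perception"] = (0.4 * df["ai_knowledge"] + 0.3 * df["explicit_trust"]
                            + rng.normal(scale=0.8, size=n))
df["ai_preference"] = (-0.5 * df["risk_perception"] + 0.5 * df["benefit_perception"]
                       + rng.normal(scale=0.8, size=n))

# Path model: predictors -> risk/benefit perceptions -> relative preference for AI.
desc = """
risk_perception ~ implicit_trust + explicit_trust + ai_knowledge
benefit_perception ~ implicit_trust + explicit_trust + ai_knowledge
ai_preference ~ risk_perception + benefit_perception
"""

model = Model(desc)
model.fit(df)
print(model.inspect())  # path coefficients, standard errors, p-values
```

In this sketch, the indirect paths (e.g., implicit trust → risk perception → AI preference) correspond to the mediation effects the abstract reports as significant or non-significant; with real survey data, the inspect() output would show which paths carry the association.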

