Department of Psychology, University of Milano-Bicocca, 20126, Milan, Italy.
Milan Center for Neuroscience, University of Milano-Bicocca, 20126, Milan, Italy.
Sci Rep. 2022 Sep 2;12(1):14952. doi: 10.1038/s41598-022-18751-2.
Artificial Intelligence (AI) systems are a valuable support for decision-making, with many applications in the medical domain. The interaction between physicians (MDs) and AI has attracted renewed interest as deep learning systems have expanded what is possible. However, we still have limited evidence-based knowledge of the context, design, and psychological mechanisms that shape an optimal human-AI collaboration. In this multicentric study, 21 endoscopists reviewed 504 videos of lesions prospectively acquired from real colonoscopies. They were asked to provide an optical diagnosis with and without the assistance of an AI support system. Endoscopists were influenced by AI ([Formula: see text]), but not erratically: they followed the AI advice more when it was correct ([Formula: see text]) than when it was incorrect ([Formula: see text]). Endoscopists achieved this outcome through a weighted integration of their own and the AI's opinions, based on case-by-case estimates of the two reliabilities. This Bayesian-like rational behavior allowed the human-AI hybrid team to outperform either agent alone. We discuss the features of the human-AI interaction that determined this favorable outcome.
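The reliability-weighted integration described in the abstract can be illustrated with a minimal sketch. This is not the authors' actual model, only a common way to formalize such "Bayesian-like" combination: each agent's diagnostic probability is mapped to log-odds, averaged with weights proportional to its estimated case-by-case reliability (the weights and function names here are illustrative assumptions), and mapped back to a probability.

```python
import math

def logit(p):
    """Map a probability to log-odds."""
    return math.log(p / (1 - p))

def combine_opinions(p_human, p_ai, w_human, w_ai):
    """Reliability-weighted integration of two diagnostic opinions.

    p_human, p_ai : each agent's probability that the lesion is, e.g., neoplastic.
    w_human, w_ai : estimated reliability of each agent for this case
                    (illustrative weights, not the paper's fitted values).
    Combination is a precision-style weighted average in log-odds space.
    """
    z = (w_human * logit(p_human) + w_ai * logit(p_ai)) / (w_human + w_ai)
    return 1.0 / (1.0 + math.exp(-z))

# An uncertain endoscopist (0.5) paired with a confident AI (0.9),
# equally weighted, yields an intermediate combined estimate.
print(combine_opinions(0.5, 0.9, 1.0, 1.0))
```

When one agent is judged more reliable for a given case, its weight dominates and the team estimate moves toward that agent's opinion, which is the mechanism the abstract credits for the hybrid team outperforming either agent alone.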