Kunar Melina A, Montana Giovanni, Watson Derrick G
Department of Psychology, The University of Warwick, Coventry, CV4 7AL, UK.
Department of Statistics, The University of Warwick, Coventry, CV4 7AL, UK.
Psychon Bull Rev. 2025 Apr;32(2):951-960. doi: 10.3758/s13423-024-02601-5. Epub 2024 Oct 24.
Recent developments in artificial intelligence (AI) have led to changes in healthcare. Government and regulatory bodies have advocated the need for transparency in AI systems, with recommendations to provide users with more details about AI accuracy and how AI systems work. However, increased transparency could lead to negative outcomes if humans become over-reliant on the technology. This study investigated how changes in AI transparency affected human decision-making in a medical-screening visual search task. Transparency was manipulated by either giving or withholding knowledge about the accuracy of an 'AI system'. We tested performance in seven simulated laboratory mammography tasks, in which observers searched for a cancer that could be correctly or incorrectly flagged by computer-aided detection (CAD) 'AI prompts'. Across tasks, the CAD systems varied in accuracy. In the 'transparent' condition, participants were told the accuracy of the CAD system; in the 'not transparent' condition, they were not. The results showed that increasing CAD transparency impaired task performance, producing an increase in false alarms, decreased sensitivity, an increase in recall rate, and a decrease in positive predictive value. Given the increasing investment in AI, this research shows that it is important to investigate how the transparency of AI systems affects human decision-making. Increased transparency may lead to overtrust in AI systems, which can impact clinical outcomes.
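The outcome measures named in the abstract (false alarms, sensitivity, recall rate, positive predictive value) follow standard screening/signal-detection definitions. The sketch below is purely illustrative and not taken from the paper; the function name, the example counts, and the definition of 'recall rate' as the proportion of cases flagged for further assessment are assumptions.

```python
# Illustrative sketch (hypothetical counts): how the reported screening metrics
# relate to hit / miss / false-alarm / correct-rejection counts.

def screening_metrics(hits, misses, false_alarms, correct_rejections):
    """Return sensitivity, false-alarm rate, positive predictive value, and recall rate."""
    total = hits + misses + false_alarms + correct_rejections
    sensitivity = hits / (hits + misses)                          # true-positive rate
    false_alarm_rate = false_alarms / (false_alarms + correct_rejections)
    ppv = hits / (hits + false_alarms)                            # precision of positive decisions
    recall_rate = (hits + false_alarms) / total                   # assumed: proportion of cases flagged
    return sensitivity, false_alarm_rate, ppv, recall_rate

# Example: if over-trusting AI prompts adds false alarms while hits stay constant,
# sensitivity is unchanged but the false-alarm rate and recall rate rise and PPV falls.
print(screening_metrics(hits=45, misses=5, false_alarms=10, correct_rejections=140))
print(screening_metrics(hits=45, misses=5, false_alarms=30, correct_rejections=120))
```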