Genc Yegin, Ahsen Mehmet Eren, Zhang Zhan
Seidenberg School of Computer Science and Information Systems, Pace University, New York, New York, United States of America.
Department of Business Administration, University of Illinois at Urbana-Champaign, Champaign, Illinois, United States of America.
PLoS One. 2025 Sep 10;20(9):e0321342. doi: 10.1371/journal.pone.0321342. eCollection 2025.
While there has been extensive research on explainable artificial intelligence (XAI) techniques to enhance AI recommendations, the metacognitive processes involved in interacting with AI explanations remain underexplored. This study examines how AI explanations affect human decision-making by engaging the cognitive mechanisms through which people evaluate the accuracy of AI recommendations. We conducted a large-scale experiment (N = 4,302) on Amazon Mechanical Turk (AMT) in which participants classified radiology reports as normal or abnormal. Participants were randomly assigned to one of three groups: (a) no AI input (control), (b) AI prediction only, and (c) AI prediction with explanation. Our results indicate that AI explanations enhanced task performance, and that explanations were more effective when AI prediction confidence was high or users' self-confidence was low. We conclude by discussing the implications of these findings.