Lee Doyeon, Pruitt Joseph, Zhou Tianyu, Du Jing, Odegaard Brian
Department of Psychology, University of Florida, 945 Center Dr., P.O. Box 112250, Gainesville, FL 32611, USA.
Department of Civil and Coastal Engineering, University of Florida, Weil Hall 360, 1949 Stadium Road, Gainesville, FL 32611, USA.
PNAS Nexus. 2025 Apr 24;4(5):pgaf133. doi: 10.1093/pnasnexus/pgaf133. eCollection 2025 May.
Knowing when to trust and incorporate advice from artificially intelligent (AI) systems is of increasing importance in the modern world. Research indicates that when AI provides high confidence ratings, human users often correspondingly increase their trust in its judgments, but these increases in trust can occur even when the AI fails to provide accurate information on a given task. In this piece, we argue that measures of metacognitive sensitivity provided by AI systems will likely play a critical role in (1) helping individuals calibrate their level of trust in these systems and (2) enabling advice from AI to be optimally incorporated into human-AI hybrid decision making. We draw upon a seminal finding in the perceptual decision-making literature demonstrating the importance of metacognitive ratings for optimal joint decisions, and we outline a framework to test how different types of information provided by AI systems can guide decision making.
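The flavor of the joint-decision result alluded to above can be illustrated with a standard signal-detection simulation. The sketch below is ours, not code from the paper; the sensitivity values, stimulus coding, and use of signed internal evidence as a stand-in for graded confidence are all illustrative assumptions. It shows that when two equally sensitive observers share only binary decisions, the pair does no better than either individual alone, whereas sharing graded confidence approaches the statistically optimal combination.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two simulated observers with equal sensitivity (values are illustrative).
d_a, d_b = 1.5, 1.5
n = 200_000
stim = rng.choice([-0.5, 0.5], size=n)  # two equally likely stimulus categories

# Internal evidence = stimulus scaled by sensitivity + unit-variance noise,
# so the separation between the category means equals d'.
x_a = stim * d_a + rng.normal(size=n)
x_b = stim * d_b + rng.normal(size=n)

def acc(dv):
    """Proportion correct when the sign of dv is taken as the decision."""
    return np.mean(np.sign(dv) == np.sign(stim))

# Sharing decisions only: binary votes discard confidence; disagreements
# (a vote sum of zero) are resolved by a coin flip.
votes = np.sign(x_a) + np.sign(x_b)
coin = rng.choice([-1.0, 1.0], size=n)
decisions_only = np.where(votes == 0, coin, votes)

print(f"Observer A alone:         {acc(x_a):.3f}")
print(f"Observer B alone:         {acc(x_b):.3f}")
print(f"Joint, decisions only:    {acc(decisions_only):.3f}")  # ~= individual
# Sharing graded confidence (here, the signed evidence itself) approaches
# the optimal combination, d'_joint = sqrt(d_a**2 + d_b**2), for
# independent unit-variance noise.
print(f"Joint, confidence shared: {acc(x_a + x_b):.3f}")
```

With equal observers, the decisions-only pair matches individual accuracy (agreements are correct at the individual rate and disagreements reduce to chance), while confidence sharing lifts joint sensitivity toward sqrt(2) times the individual d'; this is the kind of metacognitive benefit the authors propose AI systems should expose.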