Department of Informatics, School of Multidisciplinary Sciences, The Graduate University for Advanced Studies (SOKENDAI), Tokyo, Japan.
Digital Content and Media Sciences Research Division, National Institute of Informatics, Tokyo, Japan.
PLoS One. 2020 Feb 21;15(2):e0229132. doi: 10.1371/journal.pone.0229132. eCollection 2020.
The safety and efficiency of human-AI collaboration often depend on how appropriately humans calibrate their trust in AI agents. Over-trusting an autonomous system can cause serious safety issues. Although many studies have focused on the importance of system transparency for maintaining proper trust calibration, research on detecting and mitigating improper trust calibration remains very limited. To fill these gaps, we propose a method of adaptive trust calibration that consists of a framework for detecting improper calibration status by monitoring the user's reliance behavior, and cognitive cues called "trust calibration cues" that prompt the user to reinitiate trust calibration. We evaluated our framework and four types of trust calibration cues in an online experiment using a drone simulator. A total of 116 participants performed pothole inspection tasks using the drone's automatic inspection, whose reliability could fluctuate depending on the weather conditions. The participants had to decide whether to rely on automatic inspection or to inspect manually. The results showed that adaptively presenting simple cues significantly promoted trust calibration during over-trust.
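To make the detection idea concrete, the following is a minimal sketch of an over-trust monitor, assuming hypothetical names and thresholds (the class `AdaptiveTrustCalibrator`, the window size, and the detection rule are illustrative assumptions, not the authors' published algorithm): it tracks recent reliance decisions and estimated automation reliability, and signals that a trust calibration cue should be shown when the user keeps delegating even though reliability has dropped.

```python
# Illustrative sketch only: names, thresholds, and the detection rule are
# assumptions for demonstration, not the paper's exact algorithm.
from collections import deque


class AdaptiveTrustCalibrator:
    """Monitors reliance behavior and flags suspected over-trust."""

    def __init__(self, window=5, reliability_threshold=0.6, reliance_threshold=0.8):
        self.window = window                                  # number of recent trials to inspect
        self.reliability_threshold = reliability_threshold    # below this, automation counts as unreliable
        self.reliance_threshold = reliance_threshold          # above this reliance rate, suspect over-trust
        self.history = deque(maxlen=window)                   # recent (relied_on_automation, reliability) pairs

    def record_trial(self, relied_on_automation: bool, estimated_reliability: float) -> bool:
        """Record one inspection trial; return True if a calibration cue should be presented."""
        self.history.append((relied_on_automation, estimated_reliability))
        if len(self.history) < self.window:
            return False
        reliance_rate = sum(r for r, _ in self.history) / self.window
        mean_reliability = sum(p for _, p in self.history) / self.window
        # Over-trust: the user keeps relying on automation although it is currently unreliable.
        return reliance_rate >= self.reliance_threshold and mean_reliability < self.reliability_threshold


# Example: reliability drops (e.g., worsening weather) while the user keeps relying on automation.
calibrator = AdaptiveTrustCalibrator()
for reliability in [0.9, 0.8, 0.5, 0.4, 0.4]:
    if calibrator.record_trial(relied_on_automation=True, estimated_reliability=reliability):
        print("Present trust calibration cue: suggest switching to manual inspection.")
```

In this sketch the cue is triggered adaptively, only when the monitored behavior and the estimated reliability diverge, which mirrors the paper's idea of prompting the user to reinitiate trust calibration rather than showing reliability information continuously.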