Shimojo Asaya, Miwa Kazuhisa, Terai Hitoshi
Department of Cognitive and Psychological Sciences, Graduate School of Informatics, Nagoya University, Nagoya, Japan.
Department of Information and Computer Science, Faculty of Humanity-Oriented Science and Engineering, Kindai University, Higashi-osaka, Japan.
Front Psychol. 2020 Dec 9;11:575746. doi: 10.3389/fpsyg.2020.575746. eCollection 2020.
Given the recent development of explainable artificial intelligence, it is important to reveal how humans evaluate explanations. What makes people feel that one explanation is more likely than another? In the present study, we examine how explanatory virtues affect the process of estimating subjective posterior probability. By systematically manipulating two virtues, Simplicity (the number of causes used to explain effects) and Scope (the number of effects predicted by causes), across three different conditions, we clarified two points in Experiment 1: (i) Scope's effect is greater than Simplicity's, and (ii) the two virtues affect the outcome independently. In Experiment 2, we found that instruction about the explanatory structure increased the impact of both virtues, especially that of Simplicity. These results suggest that Scope predominantly affects the estimation of subjective posterior probability, but that, if a perspective on the explanatory structure is provided, Simplicity can also affect probability estimation.