Suermondt H J, Cooper G F
Section on Medical Informatics, Stanford University, CA.
Proc Annu Symp Comput Appl Med Care. 1992:579-85.
Providing explanations of the conclusions of decision-support systems can be viewed as presenting inference results in a manner that enhances the user's insight into how those results were obtained. The ability to explain inferences has been demonstrated to be an important factor in making medical decision-support systems acceptable for clinical use. Although many researchers in artificial intelligence have explored the automatic generation of explanations for decision-support systems based on symbolic reasoning, research in automated explanation of probabilistic results has been limited. We present the results of an evaluation study of INSITE, a program that explains the reasoning of decision-support systems based on Bayesian belief networks. In the domain of anesthesia, we compared subjects who had access to a belief network with explanations of the inference results to control subjects who used the same belief network without explanations. We show that, compared to control subjects, the explanation subjects demonstrated greater diagnostic accuracy, were more confident about their conclusions, were more critical of the belief network, and found the presentation of the inference results clearer.
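The kind of probabilistic reasoning INSITE explains can be illustrated with a toy two-node belief network (hypothetical numbers; the actual anesthesia network is far larger): a single disease node with a prior, and a finding node with a conditional probability table. Bayes' rule then yields the posterior that an explanation facility would need to justify to the user.

```python
# Minimal sketch of inference in a two-node Bayesian belief network.
# All probabilities here are illustrative, not from the INSITE study.

def posterior(prior, p_finding_given_disease, p_finding_given_no_disease):
    """P(disease | finding present) by Bayes' rule."""
    numerator = prior * p_finding_given_disease
    denominator = numerator + (1.0 - prior) * p_finding_given_no_disease
    return numerator / denominator

# Example: rare condition (1% prior), sensitive finding (90%),
# occasional false positives (5%).
p = posterior(0.01, 0.90, 0.05)
print(f"P(disease | finding) = {p:.3f}")
```

An explanation component would present not just the posterior but the contributions of each term, e.g. how the low prior keeps the posterior modest despite a strong likelihood ratio.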