Madhavan, Poornima; Wiegmann, Douglas A.
University of Illinois at Urbana-Champaign, Champaign, Illinois, USA.
Hum Factors. 2007 Oct;49(5):773-85. doi: 10.1518/001872007X230154.
Two experiments are described that examined operators' perceptions of decision aids.
Research has suggested certain biases against automation that influence how humans interact with it. We differentiated preconceived biases from post hoc biases and examined their effects on advice acceptance.
In Study 1 we examined operators' trust in and perceived reliability of human versus automated advisers of varying pedigree (expert vs. novice), based on written descriptions presented before operators interacted with the advisers. In Study 2 we examined participants' post hoc trust in, perceived reliability of, and dependence on these advisers after participants had directly experienced the advisers' actual reliability (90% vs. 70%) in a luggage-screening task.
In Study 1 measures of perceived reliability indicated that automation was perceived as more reliable than humans across pedigrees. Measures of trust indicated that automated "novices" were trusted more than human "novices"; human "experts" were trusted more than automated "experts." In Study 2, perceived reliability varied as a function of pedigree, whereas subjective trust was always higher for automation than for humans. Advice acceptance from novice automation was always higher than from novice humans. However, when advisers were 70% reliable, errors generated by expert automation led to a drop in compliance/reliance on expert automation relative to expert humans.
Preconceived expectations of automation influence the use of these aids in actual tasks.
The results provide a reference point for deriving indices of "optimal" user interaction with decision aids and for developing frameworks of trust in decision support systems.