The University of Western Australia, 35 Stirling Highway, Perth, WA, 6009, Australia.
Curtin University, Perth, Australia.
Cogn Res Princ Implic. 2024 Oct 8;9(1):67. doi: 10.1186/s41235-024-00599-x.
Increased automation transparency can improve the accuracy of automation use but can lead to increased bias towards agreeing with advice. Information about the automation's confidence in its advice may also increase the predictability of automation errors. We examined the effects of providing automation transparency, automation confidence information, and their potential interacting effect on the accuracy of automation use and other outcomes. Participants completed an uninhabited vehicle (UV) management task in which they selected the optimal UV to complete missions. Low or high automation transparency was provided, and participants agreed/disagreed with automated advice on each mission. We manipulated between participants whether automated advice was accompanied by confidence information. This information indicated on each trial whether the automation was "somewhat" or "highly" confident in its advice. Higher transparency improved the accuracy of automation use and led to faster decisions, lower perceived workload, and increased trust and perceived usability. Providing participants with automation confidence information, compared with not providing it, did not have an overall impact on any outcome variable and did not interact with transparency. Despite this lack of benefit, participants who were provided confidence information did use it. On trials where lower rather than higher confidence information was presented, hit rates decreased, correct rejection rates increased, decision times slowed, and perceived workload increased, all suggestive of decreased reliance on automated advice. Such trial-by-trial shifts in automation use bias and other outcomes were not moderated by transparency. These findings can potentially inform the design of automated decision-support systems that are more understandable by humans in order to optimise human-automation interaction.