Kevin Anthony Hoff, Masooda Bashir
University of Illinois at Urbana-Champaign.
Hum Factors. 2015 May;57(3):407-34. doi: 10.1177/0018720814547570. Epub 2014 Sep 2.
We systematically review recent empirical research on factors that influence trust in automation to present a three-layered trust model that synthesizes existing knowledge.
Much of the existing research on factors that guide human-automation interaction centers on trust, a variable that often determines the willingness of human operators to rely on automation. Studies have utilized a variety of automated systems in diverse experimental paradigms to identify factors that affect operators' trust.
We performed a systematic review of empirical research on trust in automation published from January 2002 to June 2013. Papers were deemed eligible only if they reported the results of a human-subjects experiment in which humans interacted with an automated system to achieve a goal. Additionally, a relationship between trust (or a trust-related behavior) and another variable had to be measured. Altogether, 101 papers containing 127 eligible studies were included in the review.
Our analysis revealed three layers of variability in human-automation trust (dispositional trust, situational trust, and learned trust), which we organize into a model. We propose design recommendations for creating trustworthy automation and identify environmental conditions that can affect the strength of the relationship between trust and reliance. Future research directions are also discussed for each layer of trust.
Our three-layered trust model provides a new lens for conceptualizing the variability of trust in automation. Its structure can be applied to help guide future research and develop training interventions and design procedures that encourage appropriate trust.