A review of mathematical models of human trust in automation.

Author Information

Rodriguez Rodriguez Lucero, Bustamante Orellana Carlos E, Chiou Erin K, Huang Lixiao, Cooke Nancy, Kang Yun

Affiliations

Simon A. Levin Mathematical and Computational Modeling Sciences Center, Arizona State University, Tempe, AZ, United States.

Human Systems Engineering, Arizona State University, Mesa, AZ, United States.

Publication Information

Front Neuroergon. 2023 Jun 13;4:1171403. doi: 10.3389/fnrgo.2023.1171403. eCollection 2023.

Abstract

Understanding how people trust autonomous systems is crucial to achieving better performance and safety in human-autonomy teaming. Trust in automation is a rich and complex process that has given rise to numerous measures and approaches aimed at comprehending and examining it. Although researchers have been developing models for understanding the dynamics of trust in automation for several decades, these models are primarily conceptual and often involve components that are difficult to measure. Mathematical models have emerged as powerful tools for gaining insightful knowledge about the dynamic processes of trust in automation. This paper provides an overview of various mathematical modeling approaches, their limitations, feasibility, and generalizability for trust dynamics in human-automation interaction contexts. Furthermore, this study proposes a novel and dynamic approach to model trust in automation, emphasizing the importance of incorporating different timescales into measurable components. Due to the complex nature of trust in automation, it is also suggested to combine machine learning and dynamic modeling approaches and to incorporate physiological data.

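To make the kind of dynamic, multi-timescale modeling discussed in the abstract more concrete, the sketch below simulates a simple first-order trust update driven by observed automation performance, with separate fast and slow components. This is only an illustrative sketch, not the model proposed in the paper; the update rule, the fast/slow decomposition, and every function and parameter name (simulate_trust, alpha_fast, alpha_slow, and so on) are assumptions chosen to make the timescale idea tangible.

```python
import numpy as np

# Minimal illustrative sketch (not the paper's model): trust T moves toward
# the automation's observed performance, with a fast component that reacts
# to each interaction and a slow component that drifts across many of them.
# All parameter names and values are assumptions for illustration only.

def simulate_trust(performance, alpha_fast=0.5, alpha_slow=0.05,
                   w_fast=0.4, w_slow=0.6, t0=0.5):
    """Simulate trust over a sequence of automation performance outcomes.

    performance : iterable of observed reliability in [0, 1] per interaction
    alpha_fast  : learning rate of the fast (momentary) trust component
    alpha_slow  : learning rate of the slow (dispositional) trust component
    w_fast/slow : weights combining the two components into reported trust
    t0          : initial trust level for both components
    """
    fast, slow = t0, t0
    trust = []
    for p in performance:
        fast += alpha_fast * (p - fast)   # reacts quickly to each outcome
        slow += alpha_slow * (p - slow)   # changes slowly over many outcomes
        trust.append(w_fast * fast + w_slow * slow)
    return np.array(trust)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # 30 mostly reliable interactions, then a burst of automation failures
    perf = np.concatenate([rng.binomial(1, 0.9, 30), rng.binomial(1, 0.2, 10)])
    print(np.round(simulate_trust(perf), 2))
```

Running the script on a performance sequence that starts reliable and then degrades shows the combined trust estimate rising, then dropping sharply through the fast component while the slow component declines more gradually, which is one qualitative behavior that multi-timescale formulations aim to capture.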

Similar Articles

1. A review of mathematical models of human trust in automation.
Front Neuroergon. 2023 Jun 13;4:1171403. doi: 10.3389/fnrgo.2023.1171403. eCollection 2023.
2. Trusting Automation: Designing for Responsivity and Resilience.
Hum Factors. 2023 Feb;65(1):137-165. doi: 10.1177/00187208211009995. Epub 2021 Apr 27.
3. From 'automation' to 'autonomy': the importance of trust repair in human-machine interaction.
Ergonomics. 2018 Oct;61(10):1409-1427. doi: 10.1080/00140139.2018.1457725. Epub 2018 Apr 9.
5. Adaptive Cognitive Mechanisms to Maintain Calibrated Trust and Reliance in Automation.
Front Robot AI. 2021 May 24;8:652776. doi: 10.3389/frobt.2021.652776. eCollection 2021.
7. The Impact of Training on Human-Autonomy Team Communications and Trust Calibration.
Hum Factors. 2023 Nov;65(7):1554-1570. doi: 10.1177/00187208211047323. Epub 2021 Oct 1.
8. Trust in automation: designing for appropriate reliance.
Hum Factors. 2004 Spring;46(1):50-80. doi: 10.1518/hfes.46.1.50_30392.
10. Toward Quantifying Trust Dynamics: How People Adjust Their Trust After Moment-to-Moment Interaction With Automation.
Hum Factors. 2023 Aug;65(5):862-878. doi: 10.1177/00187208211034716. Epub 2021 Aug 29.

References Cited in This Article

1. Trust in Robots: Challenges and Opportunities.
Curr Robot Rep. 2020;1(4):297-309. doi: 10.1007/s43154-020-00029-y. Epub 2020 Sep 3.
2. Measurement of Trust in Automation: A Narrative Review and Reference Guide.
Front Psychol. 2021 Oct 19;12:604977. doi: 10.3389/fpsyg.2021.604977. eCollection 2021.
3. Trusting Automation: Designing for Responsivity and Resilience.
Hum Factors. 2023 Feb;65(1):137-165. doi: 10.1177/00187208211009995. Epub 2021 Apr 27.
4. Exploring Trust in Self-Driving Vehicles Through Text Analysis.
Hum Factors. 2020 Mar;62(2):260-277. doi: 10.1177/0018720819872672. Epub 2019 Sep 10.
5. The Relationship Between Trust and Use Choice in Human-Robot Interaction.
Hum Factors. 2019 Jun;61(4):614-626. doi: 10.1177/0018720818816838. Epub 2019 Jan 2.
6. Trust in automation: integrating empirical evidence on factors that influence trust.
Hum Factors. 2015 May;57(3):407-34. doi: 10.1177/0018720814547570. Epub 2014 Sep 2.
7. Review of a pivotal Human Factors article: "Humans and automation: use, misuse, disuse, abuse".
Hum Factors. 2008 Jun;50(3):404-10. doi: 10.1518/001872008X288547.
8. Automation failures on tasks easily performed by operators undermine trust in automated aids.
Hum Factors. 2006 Summer;48(2):241-56. doi: 10.1518/001872006777724408.
9. Trust in automation: designing for appropriate reliance.
Hum Factors. 2004 Spring;46(1):50-80. doi: 10.1518/hfes.46.1.50_30392.
10. Trust and distrust in organizations: emerging perspectives, enduring questions.
Annu Rev Psychol. 1999;50:569-98. doi: 10.1146/annurev.psych.50.1.569.
