Understanding trust calibration in automated driving: the effect of time, personality, and system warning design.

Affiliation

School of Economics and Management, Beihang University, Beijing, P. R. China.

Publication information

Ergonomics. 2023 Dec;66(12):2165-2181. doi: 10.1080/00140139.2023.2191907. Epub 2023 Mar 29.

Abstract

In a future of human-automation co-driving, dynamic trust must be considered. This paper explores how trust changes over time and how multiple factors (time, trust propensity, neuroticism, and takeover warning design) jointly calibrate trust. We conducted two driving simulator experiments to measure drivers' trust before, during, and after the experiment under takeover scenarios. The results showed that trust in automation increased during short-term interactions and dropped after four months, though it remained higher than pre-experiment trust. Initial trust and trust propensity had a stable impact on trust. Drivers trusted the system more with the two-stage (MR + TOR) warning design than with the one-stage (TOR) design. Neuroticism had a significant effect on the countdown warning compared with the content warning. These results provide new data and knowledge for trust calibration in takeover scenarios. The findings can help in designing more reasonable automated driving systems for long-term human-automation interaction.

