Lim Cherin, Prendez David, Boyle Linda Ng, Rajivan Prashanth
University of Washington, USA.
New York University, USA.
Hum Factors. 2025 May;67(5):485-502. doi: 10.1177/00187208241283321. Epub 2024 Sep 18.
Objective: This study examines the extent to which cybersecurity attacks on autonomous vehicles (AVs) affect human trust dynamics and driver behavior.

Background: Human trust is critical for the adoption and continued use of AVs. A pressing concern in this context is the persistent threat of cyberattacks, which endanger the secure operation of AVs and, consequently, human trust.

Method: A driving simulator experiment was conducted with 40 participants who were randomly assigned to one of two groups: (1) Experience and Feedback and (2) Experience-Only. All participants completed three drives: Baseline, Attack, and Post-Attack. During the Attack drive, participants were prevented from properly operating the vehicle in multiple instances. Only the "Experience and Feedback" group received a security update in the Post-Attack drive, indicating that the vehicle's vulnerability had been mitigated. Trust and foot position were recorded for each drive.

Results: Findings suggest that attacks on AVs significantly degrade human trust, and that trust remains degraded even after an error-free drive. Providing an update about the mitigation of the vulnerability did not significantly aid trust repair.

Conclusion: Trust toward AVs should be analyzed as an emergent and dynamic construct, requiring autonomous systems capable of calibrating trust after malicious attacks through appropriate experience and interaction design.

Application: The results of this study can be applied when building driver- and situation-adaptive AI systems within AVs.