University of Pittsburgh, USA.
Hum Factors. 2024 Dec;66(12):2606-2620. doi: 10.1177/00187208241234810. Epub 2024 Mar 4.
The study aimed to enhance transparency in autonomous systems by automatically generating and visualizing confidence estimates and explanations, and by assessing their impact on performance, trust, preference, and eye-tracking behaviors in human-automation interaction.
System transparency is vital to maintaining appropriate levels of trust and to mission success. Previous studies reported mixed results regarding the impact of displaying likelihood information and explanations, and they often relied on hand-crafted information, which limited scalability and failed to capture real-world dynamics.
We conducted a dual-task experiment involving 42 university students who operated a simulated surveillance testbed with assistance from intelligent detectors. The study used a 2 (confidence visualization: yes vs. no) × 3 (visual explanations: none, bounding boxes, bounding boxes and keypoints) mixed design. Task performance, human trust, preference for intelligent detectors, and eye-tracking behaviors were evaluated.
Visual explanations using bounding boxes and keypoints improved detection task performance when confidence was not displayed. Meanwhile, visual explanations enhanced trust in and preference for the intelligent detector, regardless of the explanation type. Confidence visualization did not influence human trust in or preference for the intelligent detector. Moreover, both types of visual information slowed saccade velocities.
The study demonstrated that visual explanations can improve performance, trust, and preference in human-automation interaction without confidence visualization, partly by changing search strategies. However, excessive information may cause adverse effects.
These findings provide guidance for the design of transparent automation, emphasizing the importance of context-appropriate and user-centered explanations to foster effective human-machine collaboration.