More Is Not Always Better: Impacts of AI-Generated Confidence and Explanations in Human-Automation Interaction.

Affiliations

University of Pittsburgh, USA.

Publication Information

Hum Factors. 2024 Dec;66(12):2606-2620. doi: 10.1177/00187208241234810. Epub 2024 Mar 4.

DOI: 10.1177/00187208241234810
PMID: 38437598
Abstract

OBJECTIVE

The study aimed to enhance transparency in autonomous systems by automatically generating and visualizing confidence and explanations and assessing their impacts on performance, trust, preference, and eye-tracking behaviors in human-automation interaction.

BACKGROUND

System transparency is vital to maintaining appropriate levels of trust and mission success. Previous studies presented mixed results regarding the impact of displaying likelihood information and explanations, and often relied on hand-created information, limiting scalability and failing to address real-world dynamics.

METHOD

We conducted a dual-task experiment involving 42 university students who operated a simulated surveillance testbed with assistance from intelligent detectors. The study used a 2 (confidence visualization: yes vs. no) × 3 (visual explanations: none, bounding boxes, bounding boxes and keypoints) mixed design. Task performance, human trust, preference for intelligent detectors, and eye-tracking behaviors were evaluated.

RESULTS

Visual explanations using bounding boxes and keypoints improved detection task performance when confidence was not displayed. Meanwhile, visual explanations enhanced trust and preference for the intelligent detector, regardless of the explanation type. Confidence visualization did not influence human trust in and preference for the intelligent detector. Moreover, both types of visual information slowed saccade velocities.

CONCLUSION

The study demonstrated that visual explanations could improve performance, trust, and preference in human-automation interaction without confidence visualization, partly by changing search strategies. However, excessive information might cause adverse effects.

APPLICATION

These findings provide guidance for the design of transparent automation, emphasizing the importance of context-appropriate and user-centered explanations to foster effective human-machine collaboration.

Similar Articles

1. More Is Not Always Better: Impacts of AI-Generated Confidence and Explanations in Human-Automation Interaction.
Hum Factors. 2024 Dec;66(12):2606-2620. doi: 10.1177/00187208241234810. Epub 2024 Mar 4.
2. Not All Information Is Equal: Effects of Disclosing Different Types of Likelihood Information on Trust, Compliance and Reliance, and Task Performance in Human-Automation Teaming.
Hum Factors. 2020 Sep;62(6):987-1001. doi: 10.1177/0018720819862916. Epub 2019 Jul 26.
3. Intelligent Agent Transparency in Human-Agent Teaming for Multi-UxV Management.
Hum Factors. 2016 May;58(3):401-15. doi: 10.1177/0018720815621206. Epub 2016 Feb 11.
4. Automation trust and attention allocation in multitasking workspace.
Appl Ergon. 2018 Jul;70:194-201. doi: 10.1016/j.apergo.2018.03.008. Epub 2018 Mar 20.
5. Designing for Confidence: The Impact of Visualizing Artificial Intelligence Decisions.
Front Neurosci. 2022 Jun 24;16:883385. doi: 10.3389/fnins.2022.883385. eCollection 2022.
6. Human Performance Benefits of the Automation Transparency Design Principle: Validation and Variation.
Hum Factors. 2021 May;63(3):379-401. doi: 10.1177/0018720819887252. Epub 2019 Dec 13.
7. Near-Perfect Automation: Investigating Performance, Trust, and Visual Attention Allocation.
Hum Factors. 2023 Jun;65(4):546-561. doi: 10.1177/00187208211032889. Epub 2021 Aug 4.
8. Transparency improves the accuracy of automation use, but automation confidence information does not.
Cogn Res Princ Implic. 2024 Oct 8;9(1):67. doi: 10.1186/s41235-024-00599-x.
9. Enhancing safety in conditionally automated driving: Can more takeover request visual information make a difference in hazard scenarios with varied hazard visibility?
Accid Anal Prev. 2024 Sep;205:107687. doi: 10.1016/j.aap.2024.107687. Epub 2024 Jun 28.
10. Trust and Distrust of Automated Parking in a Tesla Model X.
Hum Factors. 2020 Mar;62(2):194-210. doi: 10.1177/0018720819865412. Epub 2019 Aug 16.