
Evaluating the Impact of Uncertainty Visualization on Model Reliance.

Author Information

Zhao Jieqiong, Wang Yixuan, Mancenido Michelle V, Chiou Erin K, Maciejewski Ross

Publication Information

IEEE Trans Vis Comput Graph. 2024 Jul;30(7):4093-4107. doi: 10.1109/TVCG.2023.3251950. Epub 2024 Jun 27.

Abstract

Machine learning models have gained traction as decision support tools for tasks that require processing copious amounts of data. However, to achieve the primary benefits of automating this part of decision-making, people must be able to trust the machine learning model's outputs. In order to enhance people's trust and promote appropriate reliance on the model, visualization techniques such as interactive model steering, performance analysis, model comparison, and uncertainty visualization have been proposed. In this study, we tested the effects of two uncertainty visualization techniques in a college admissions forecasting task, under two task difficulty levels, using Amazon's Mechanical Turk platform. Results show that (1) people's reliance on the model depends on the task difficulty and level of machine uncertainty and (2) ordinal forms of expressing model uncertainty are more likely to calibrate model usage behavior. These outcomes emphasize that reliance on decision support tools can depend on the cognitive accessibility of the visualization technique and perceptions of model performance and task difficulty.

