
Outcome-Explorer: A Causality Guided Interactive Visual Interface for Interpretable Algorithmic Decision Making.

Author Information

Hoque Md Naimul, Mueller Klaus

Publication Information

IEEE Trans Vis Comput Graph. 2022 Dec;28(12):4728-4740. doi: 10.1109/TVCG.2021.3102051. Epub 2022 Oct 26.

Abstract

The widespread adoption of algorithmic decision-making systems has brought about the necessity to interpret the reasoning behind these decisions. The majority of these systems are complex black-box models, and auxiliary models are often used to approximate and then explain their behavior. However, recent research suggests that such explanations are not readily accessible to lay users with no specific expertise in machine learning, and this can lead to an incorrect interpretation of the underlying model. In this article, we show that a predictive and interactive model based on causality is inherently interpretable, does not require any auxiliary model, and allows both expert and non-expert users to understand the model comprehensively. To demonstrate our method, we developed Outcome Explorer, a causality-guided interactive interface, and evaluated it by conducting think-aloud sessions with three expert users and a user study with 18 non-expert users. All three expert users found our tool to be comprehensive in supporting their explanation needs, while the non-expert users were able to understand the inner workings of a model easily.
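
The abstract's central claim is that a predictive model built directly on a causal graph needs no auxiliary explainer, because the outcome is computed through explicit structural equations whose intermediate values a user can inspect and perturb. The paper does not include code here, so the following is only a minimal Python sketch of that idea; the variables, equations, and coefficients are hypothetical illustrations and are not taken from Outcome Explorer.

# Minimal sketch (not the paper's implementation): a toy structural causal
# model in which the outcome is obtained by propagating values through an
# explicit causal graph, so every intermediate quantity is visible and any
# input can be changed interactively to trace its downstream effect.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class CausalNode:
    name: str
    parents: List[str]
    # Structural equation: maps the values of the parent nodes to this node's value.
    equation: Callable[[Dict[str, float]], float]

def evaluate(graph: List[CausalNode], inputs: Dict[str, float]) -> Dict[str, float]:
    """Propagate exogenous inputs through the graph (assumed topologically ordered),
    returning every node's value so the full reasoning chain can be inspected."""
    values = dict(inputs)
    for node in graph:
        values[node.name] = node.equation({p: values[p] for p in node.parents})
    return values

# Hypothetical toy graph: education -> income -> credit_score -> loan_score
graph = [
    CausalNode("income", ["education"],
               lambda v: 20_000 + 8_000 * v["education"]),
    CausalNode("credit_score", ["income"],
               lambda v: 500 + 0.002 * v["income"]),
    CausalNode("loan_score", ["credit_score", "income"],
               lambda v: 0.6 * (v["credit_score"] / 850) + 0.4 * (v["income"] / 120_000)),
]

if __name__ == "__main__":
    baseline = evaluate(graph, {"education": 4})
    # "What-if" interaction: change one input and observe the downstream changes.
    intervened = evaluate(graph, {"education": 6})
    for name in ("income", "credit_score", "loan_score"):
        print(f"{name}: {baseline[name]:.3f} -> {intervened[name]:.3f}")

Because prediction and explanation share the same graph, the "explanation" is simply the trace of values produced above; an interactive interface such as the one the abstract describes would let users edit inputs and watch these traces update.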

