
Visual Analytics for Explainable Deep Learning.

Author Information

Choo Jaegul, Liu Shixia

Publication Information

IEEE Comput Graph Appl. 2018 Jul/Aug;38(4):84-92. doi: 10.1109/MCG.2018.042731661.

Abstract

Recently, deep learning has been advancing the state of the art in artificial intelligence to a new level, and humans rely on artificial intelligence techniques more than ever. However, even with such unprecedented advancements, the lack of explanation regarding the decisions made by deep learning models and absence of control over their internal processes act as major drawbacks in critical decision-making processes, such as precision medicine and law enforcement. In response, efforts are being made to make deep learning interpretable and controllable by humans. This article reviews visual analytics, information visualization, and machine learning perspectives relevant to this aim, and discusses potential challenges and future research directions.

