Choo Jaegul, Liu Shixia
IEEE Comput Graph Appl. 2018 Jul/Aug;38(4):84-92. doi: 10.1109/MCG.2018.042731661.
Recently, deep learning has advanced the state of the art in artificial intelligence to a new level, and humans rely on artificial intelligence techniques more than ever. However, even with such unprecedented advancements, the lack of explanation for the decisions made by deep learning models and the absence of control over their internal processes remain major drawbacks in critical decision-making processes, such as precision medicine and law enforcement. In response, efforts are being made to make deep learning interpretable and controllable by humans. This article reviews visual analytics, information visualization, and machine learning perspectives relevant to this aim, and discusses potential challenges and future research directions.