Bruckert Sebastian, Finzel Bettina, Schmid Ute
Cognitive Systems, University of Bamberg, Bamberg, Germany.
Front Artif Intell. 2020 Sep 24;3:507973. doi: 10.3389/frai.2020.507973. eCollection 2020.
The increasing quality and performance of artificial intelligence (AI) in general, and machine learning (ML) in particular, has been accompanied by a wider use of these approaches in everyday life. As part of this development, ML classifiers have also gained importance for diagnosing diseases within biomedical engineering and the medical sciences. However, many of these ubiquitous high-performing ML algorithms have a black-box nature, leading to opaque and incomprehensible systems that complicate human interpretation of single predictions or of the whole prediction process. This poses a serious challenge for human decision makers to develop trust, which is much needed in life-changing decision tasks. On the one hand, this paper is designed to answer the question of how expert companion systems for decision support can be designed to be interpretable and therefore transparent and comprehensible for humans. On the other hand, an approach for interactive ML as well as human-in-the-loop learning is demonstrated in order to integrate human expert knowledge into ML models so that humans and machines act as companions within a critical decision task. We especially address the gap between ML classifiers and their human users as a prerequisite for semantically relevant and useful explanations as well as interactions. Our roadmap paper presents and discusses an interdisciplinary yet integrated Comprehensible Artificial Intelligence (cAI) transition framework with regard to the task of medical diagnosis. We explain and integrate relevant concepts and research areas to provide the reader with a roadmap for achieving the transition from opaque black-box models to interactive, transparent, comprehensible, and trustworthy systems. To make our approach tangible, we present suitable state-of-the-art methods with regard to the medical domain and include a realization concept of our framework. The emphasis is on the concept of Mutual Explanations (ME), which we introduce as a dialog-based, incremental process in order to provide human ML users not only with trust, but also with stronger participation in the learning process.