Teso Stefano, Alkan Öznur, Stammer Wolfgang, Daly Elizabeth
CIMeC and DISI, University of Trento, Trento, Italy.
Optum, Dublin, Ireland.
Front Artif Intell. 2023 Feb 23;6:1066049. doi: 10.3389/frai.2023.1066049. eCollection 2023.
Explanations have gained increasing interest in the AI and Machine Learning (ML) communities as a way to improve model transparency and allow users to form a mental model of a trained ML model. However, explanations can go beyond this one-way communication and serve as a mechanism for eliciting user control, because once users understand, they can then provide feedback. The goal of this paper is to present an overview of research in which explanations are combined with interactive capabilities as a means to learn new models from scratch and to edit and debug existing ones. To this end, we draw a conceptual map of the state of the art, grouping relevant approaches by their intended purpose and by how they structure the interaction, highlighting similarities and differences between them. We also discuss open research issues and outline possible directions forward, in the hope of spurring further work on this blossoming research topic.