Holzinger Andreas
Human-Centered AI Lab, Institute for Medical Informatics & Statistics, Medical University Graz, Graz, Austria.
xAI Lab, Alberta Machine Intelligence Institute, Edmonton, Canada.
I Com (Berl). 2021 Jan 26;19(3):171-179. doi: 10.1515/icom-2020-0024. Epub 2021 Jan 15.
Progress in statistical machine learning has made AI in medicine successful, in certain classification tasks even beyond human-level performance. Nevertheless, correlation is not causation, and successful models are often complex "black boxes", which makes it hard to understand how a result has been achieved. The explainable AI (xAI) community develops methods, e.g., to highlight which input parameters are relevant for a result; in the medical domain, however, there is a need for causability: in the same way that usability encompasses measurements for the quality of use, causability encompasses measurements for the quality of explanations produced by xAI methods. The key for future human-AI interfaces is to map explainability to causability and to allow a domain expert to ask questions in order to understand why an AI came up with a result, and also to ask "what-if" questions (counterfactuals) to gain insight into the underlying explanatory factors of a result. Multi-modal causability is important in the medical domain, because different modalities often contribute to a result.
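The "what-if" (counterfactual) questioning described in the abstract can be illustrated with a minimal sketch: given a black-box risk model, search for the smallest change to one feature that flips the model's decision. All names, weights, and the threshold below are hypothetical stand-ins, not taken from the article; real counterfactual methods optimize over all features under plausibility constraints.

```python
# Hypothetical toy example: the model, feature names, weights, and
# threshold are illustrative assumptions, not from the article.

def risk_score(features):
    """A stand-in 'black-box' model: a weighted sum of patient features."""
    weights = {"age": 0.02, "bmi": 0.1, "blood_pressure": 0.005}
    return sum(weights[k] * v for k, v in features.items())

def counterfactual(features, feature, threshold=4.5, step=-0.5, max_iter=100):
    """Ask 'what-if': how far must one feature move to flip the decision?

    Decreases `feature` in increments of `step` until the risk score
    drops below `threshold`, returning the modified feature dict.
    """
    cf = dict(features)
    for _ in range(max_iter):
        if risk_score(cf) < threshold:
            return cf  # smallest stepwise change that flips the outcome
        cf[feature] += step
    return None  # no counterfactual found within the search budget

patient = {"age": 60, "bmi": 32.0, "blood_pressure": 140}
cf = counterfactual(patient, "bmi")
# A domain expert can now read off the explanatory factor:
# "the decision would change if BMI were reduced to cf['bmi']".
```

Such a counterfactual answers the expert's question in terms of actionable input changes rather than internal model parameters, which is the kind of explanation causability aims to measure the quality of.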