Diagnostic and Research Center for Molecular BioMedicine, Diagnostic & Research Institute of Pathology, Medical University of Graz, Neue Stiftingtalstrasse 6, 8010 Graz, Austria.
Institute for Medical Informatics, Statistics and Documentation, Medical University of Graz, Auenbruggerplatz 2, 8036 Graz, Austria; Alberta Machine Intelligence Institute, University of Alberta, 2-21 Athabasca Hall, Edmonton, AB T6G, Canada.
N Biotechnol. 2022 Sep 25;70:67-72. doi: 10.1016/j.nbt.2022.05.002. Epub 2022 May 6.
Artificial Intelligence (AI) for the biomedical domain is attracting significant interest and holds considerable potential for the future of healthcare, particularly in the context of in vitro diagnostics. The European In Vitro Diagnostic Medical Device Regulation (IVDR) explicitly includes software in its requirements. This poses major challenges for In Vitro Diagnostic devices (IVDs) that involve Machine Learning (ML) algorithms for data analysis and decision support. It can make it harder to apply some of the most successful ML and Deep Learning (DL) methods in the biomedical domain simply because manufacturers cannot supply the required explanatory components. In this context, trustworthy AI has to empower biomedical professionals to take responsibility for their decision-making, which clearly raises the need for explainable AI methods. Explainable AI methods such as layer-wise relevance propagation can highlight and visualize the parts of the input, and of the internal representations of a neural network, that contributed to a result. In the same way that usability encompasses measures of the quality of use, the concept of causability encompasses measures of the quality of explanations produced by explainable AI methods. This paper describes both concepts and gives examples of how explainability and causability are essential for demonstrating scientific validity as well as analytical and clinical performance of future AI-based IVDs.
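To make the idea behind layer-wise relevance propagation more concrete, the following minimal NumPy sketch applies the standard epsilon rule to a small, fully hypothetical two-layer ReLU network (the weights, layer sizes, and input values are invented for illustration and do not come from the paper or any IVD software). It shows how the relevance of a prediction can be redistributed backwards, layer by layer, until each input feature receives a relevance score that can then be visualized.

```python
import numpy as np

# Hypothetical two-layer ReLU network; weights and input are made-up example values.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)   # layer 1: 4 input features -> 8 hidden units
W2, b2 = rng.normal(size=(3, 8)), np.zeros(3)   # layer 2: 8 hidden units -> 3 output classes

def forward(x):
    a1 = np.maximum(0, W1 @ x + b1)             # hidden activations (ReLU)
    z2 = W2 @ a1 + b2                           # output logits
    return a1, z2

def lrp_epsilon(a, W, b, R_out, eps=1e-6):
    """Redistribute relevance R_out from a layer's output to its input (epsilon rule)."""
    z = W @ a + b
    z = z + eps * np.where(z >= 0, 1.0, -1.0)   # stabilizer avoids division by zero
    s = R_out / z                               # relevance per unit of pre-activation
    return a * (W.T @ s)                        # relevance assigned to each input neuron

x = np.array([1.0, -0.5, 2.0, 0.3])             # example input features
a1, z2 = forward(x)

# Start the backward pass from the relevance of the predicted class only.
R2 = np.zeros_like(z2)
R2[np.argmax(z2)] = z2[np.argmax(z2)]

R1 = lrp_epsilon(a1, W2, b2, R2)                # relevance at the hidden layer
R0 = lrp_epsilon(x, W1, b1, R1)                 # relevance at the input features

print("input relevances:", R0)                  # which input features drove the prediction
```

In an imaging or omics IVD, the same backward redistribution would yield a relevance value per pixel or per measured feature, which can be rendered as a heatmap; such a visualization is the explanatory artifact whose quality causability is intended to measure.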