Centre for Medical Image Computing, Department of Computer Science, University College London, London, UK.
Dementia Research Centre, Queen Square Institute of Neurology, University College London, London, UK.
Alzheimers Dement. 2023 May;19(5):2135-2149. doi: 10.1002/alz.12948. Epub 2023 Feb 3.
Machine learning research into automated dementia diagnosis is becoming increasingly popular but so far has had limited clinical impact. A key challenge is building robust and generalizable models that generate decisions that can be reliably explained. Some models are designed to be inherently "interpretable," whereas post hoc "explainability" methods can be used for other models.
Here we sought to summarize the state of the art of interpretable machine learning for dementia.
We identified 92 studies using PubMed, Web of Science, and Scopus. Studies demonstrate promising classification performance but vary in their validation procedures and reporting standards, and rely heavily on popular data sets.
Future work should involve clinicians to validate explanation methods and make conclusive inferences about dementia-related disease pathology. Critically analyzing model explanations also requires an understanding of the interpretability methods themselves. Patient-specific explanations are also required to demonstrate the benefit of interpretable machine learning in clinical practice.
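To make the distinction drawn above concrete, the sketch below contrasts an inherently interpretable model, whose parameters serve directly as the explanation, with a post hoc explanation applied to a black-box model. The synthetic data, the illustrative feature interpretation, and the choice of scikit-learn estimators are assumptions for illustration only, not methods from the studies reviewed here.

```python
# Minimal sketch: interpretable model vs. post hoc explainability.
# Data and estimators are illustrative assumptions, not from the review.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Hypothetical features (e.g., hippocampal volume, age, cognitive score)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=200) > 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Inherently interpretable: the fitted coefficients are the explanation.
interpretable = LogisticRegression().fit(X_train, y_train)
print("coefficients:", interpretable.coef_)

# Post hoc explainability: a black-box model is explained after training,
# here via permutation feature importance on held-out data.
black_box = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(black_box, X_test, y_test,
                                n_repeats=10, random_state=0)
print("permutation importances:", result.importances_mean)
```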