

Explainable machine learning approach to predict and explain the relationship between task-based fMRI and individual differences in cognition.

Author Affiliations

Department of Psychology, University of Otago, William James Building, 275 Leith Walk, Dunedin 9016, New Zealand.

Longitudinal Studies Section, Translational Gerontology Branch, National Institute on Aging, National Institutes of Health, 251 Bayview Boulevard, Rm 05B113A, Biomedical Research Center, Baltimore, MD 21224, USA.

Publication Information

Cereb Cortex. 2023 Mar 10;33(6):2682-2703. doi: 10.1093/cercor/bhac235.

Abstract

Despite decades of costly research, we still cannot accurately predict individual differences in cognition from task-based functional magnetic resonance imaging (fMRI). Moreover, aiming for methods with higher prediction is not sufficient. To understand brain-cognition relationships, we need to explain how these methods draw brain information to make the prediction. Here we applied an explainable machine-learning (ML) framework to predict cognition from task-based fMRI during the n-back working-memory task, using data from the Adolescent Brain Cognitive Development (ABCD) study (n = 3,989). We compared 9 predictive algorithms in their ability to predict 12 cognitive abilities. We found better out-of-sample prediction from ML algorithms than from the mass-univariate and ordinary least squares (OLS) multiple-regression approaches. Among ML algorithms, Elastic Net, a linear and additive algorithm, performed similarly to or better than nonlinear and interactive algorithms. We explained how these algorithms drew information, using SHapley Additive exPlanations (SHAP), eNetXplorer, Accumulated Local Effects, and Friedman's H-statistic. These explainers demonstrated benefits of ML over OLS multiple regression. For example, ML provided some consistency in variable importance with a previous study and agreed with the mass-univariate approach on the directionality of brain-cognition relationships at different regions. Accordingly, our explainable-ML framework predicted cognition from task-based fMRI with improved prediction and explainability over standard methodologies.

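The modeling approach described in the abstract can be illustrated with a minimal sketch: fitting a cross-validated Elastic Net (a linear, additive model mixing L1 and L2 penalties) to predict a cognitive score from brain features, then checking out-of-sample performance. This is not the paper's pipeline; the data here are simulated, and all variable names and dimensions are illustrative stand-ins.

```python
# Hypothetical sketch of Elastic Net prediction of a cognitive score from
# brain-region features; data are simulated, not from the ABCD study.
import numpy as np
from sklearn.linear_model import ElasticNetCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_subjects, n_regions = 400, 50  # stand-ins for participants x brain regions
X = rng.normal(size=(n_subjects, n_regions))

# Sparse "brain-cognition" signal: only the first 5 regions carry information.
true_coef = np.zeros(n_regions)
true_coef[:5] = [1.0, -0.8, 0.6, 0.5, -0.4]
y = X @ true_coef + rng.normal(scale=1.0, size=n_subjects)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# ElasticNetCV tunes the penalty strength (alpha) and the L1/L2 mix
# (l1_ratio) by cross-validation on the training split only.
model = ElasticNetCV(l1_ratio=[0.1, 0.5, 0.9], cv=5, random_state=0)
model.fit(X_train, y_train)

# Out-of-sample R^2, the kind of held-out prediction the study emphasizes.
r2 = model.score(X_test, y_test)
print(f"out-of-sample R^2 = {r2:.2f}")
```

The fitted coefficients (`model.coef_`) then serve as a first, model-internal look at variable importance, which explainers such as SHAP or Accumulated Local Effects generalize to nonlinear algorithms.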


