Department of Psychology, University of Otago, William James Building, 275 Leith Walk, Dunedin 9016, New Zealand.
Longitudinal Studies Section, Translational Gerontology Branch, National Institute on Aging, National Institutes of Health, 251 Bayview Boulevard, Rm 05B113A, Biomedical Research Center, Baltimore, MD 21224, USA.
Cereb Cortex. 2023 Mar 10;33(6):2682-2703. doi: 10.1093/cercor/bhac235.
Despite decades of costly research, we still cannot accurately predict individual differences in cognition from task-based functional magnetic resonance imaging (fMRI). Moreover, aiming for methods with higher predictive performance is not sufficient. To understand brain-cognition relationships, we need to explain how these methods draw on brain information to make their predictions. Here we applied an explainable machine-learning (ML) framework to predict cognition from task-based fMRI during the n-back working-memory task, using data from the Adolescent Brain Cognitive Development (ABCD) Study (n = 3,989). We compared 9 predictive algorithms in their ability to predict 12 cognitive abilities. We found better out-of-sample prediction from ML algorithms than from the mass-univariate and ordinary least squares (OLS) multiple-regression approaches. Among ML algorithms, Elastic Net, a linear and additive algorithm, performed either similarly to or better than nonlinear and interactive algorithms. We explained how these algorithms drew on information, using SHapley Additive exPlanation (SHAP), eNetXplorer, Accumulated Local Effects, and Friedman's H-statistic. These explainers demonstrated benefits of ML over OLS multiple regression. For example, ML showed variable importance that was partially consistent with a previous study, and directionality of brain-cognition relationships at different regions that was consistent with the mass-univariate approach. Accordingly, our explainable-ML framework predicted cognition from task-based fMRI with improved prediction and explainability over standard methodologies.
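The core of the abstract's pipeline, Elastic Net prediction followed by post hoc explanation, can be summarized in a short sketch. The code below is an illustration with simulated placeholder data, not the authors' implementation: it fits an Elastic Net to hypothetical region-level n-back contrast features, reports out-of-sample predicted-vs-observed correlation, and computes SHAP-style contributions for the fitted linear model (the other explainers named in the abstract, eNetXplorer, Accumulated Local Effects, and Friedman's H-statistic, are not shown).

```python
# Illustrative sketch only (not the authors' released code): predict one
# cognitive score from region-level task-fMRI contrasts with Elastic Net,
# then explain the fitted model with SHAP-style contributions.
# The data here are simulated placeholders; the real study used ABCD participants.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import ElasticNetCV

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 100))                    # subjects x brain regions (placeholder)
y = X[:, :5].sum(axis=1) + rng.normal(size=500)    # placeholder cognitive-ability score

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Standardize features, then tune the Elastic Net penalty and mixing ratio by internal CV.
scaler = StandardScaler().fit(X_tr)
X_tr_s, X_te_s = scaler.transform(X_tr), scaler.transform(X_te)
enet = ElasticNetCV(l1_ratio=[0.1, 0.5, 0.9, 1.0], cv=5, random_state=0).fit(X_tr_s, y_tr)

# Out-of-sample performance as predicted-vs-observed Pearson r.
print("out-of-sample r:", np.corrcoef(enet.predict(X_te_s), y_te)[0, 1])

# For a linear model with (assumed) independent features, the SHAP value of
# region j for subject i is coef_j * (x_ij - mean(x_j)); this matches what the
# shap package's LinearExplainer computes under the same independence assumption.
shap_values = enet.coef_ * (X_te_s - X_tr_s.mean(axis=0))
top_regions = np.argsort(np.abs(shap_values).mean(axis=0))[::-1][:5]
print("most important regions (by mean |SHAP|):", top_regions)
```

Because Elastic Net is linear and additive, its SHAP values reduce to a closed form, which is one reason a simple penalized regression can be both competitive in prediction and easy to explain.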