School of Economics, University of New South Wales.
Australian Research Council Centre of Excellence for Mathematical and Statistical Frontiers, University of Melbourne.
Psychol Methods. 2024 Feb;29(1):219-241. doi: 10.1037/met0000458. Epub 2022 Apr 21.
Model comparison is the cornerstone of theoretical progress in psychological research. Common practice overwhelmingly relies on tools that evaluate competing models by balancing in-sample descriptive adequacy against model flexibility, with modern approaches advocating the use of marginal likelihood for hierarchical cognitive models. Cross-validation is another popular approach, but its implementation remains out of reach for cognitive models evaluated in a Bayesian hierarchical framework, the major hurdle being its prohibitive computational cost. To address this issue, we develop novel algorithms that make variational Bayes (VB) inference feasible and computationally efficient for complex hierarchical cognitive models of substantive theoretical interest. It is well known that VB produces good estimates of the first moments of the parameters, which in turn yields good estimates of predictive densities. We thus develop a novel VB algorithm that uses Bayesian prediction as a tool to perform model comparison by cross-validation, which we refer to as CVVB. In particular, CVVB can be used as a model-screening device that quickly identifies bad models. We demonstrate the utility of CVVB by revisiting a classic question in decision-making research: what latent components of processing drive the ubiquitous speed-accuracy tradeoff? We demonstrate that CVVB strongly agrees with model comparison via marginal likelihood, yet achieves the outcome in much less time. Our approach brings cross-validation within reach of theoretically important psychological models, making it feasible to compare much larger families of hierarchically specified cognitive models than has previously been possible. To enhance the applicability of the algorithm, we provide Matlab code together with a user manual so that users can easily implement VB and/or CVVB for the models considered in this article and their variants. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
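The core idea behind CVVB can be illustrated schematically: fit each candidate model to the training folds with a fast approximate (VB-style) posterior, score the held-out fold by its log predictive density, and rank models by the summed out-of-sample score. The sketch below is a minimal, hypothetical illustration in Python, not the authors' Matlab implementation; it uses a toy conjugate Gaussian model (where the "variational" posterior happens to be exact) purely to show the cross-validation scoring loop, and all function names (`cvvb_score`, `fit_gaussian_mean`, etc.) are invented for this example.

```python
import numpy as np
from scipy.stats import norm

def cvvb_score(data, fit, predict_logpdf, k=5, seed=0):
    """K-fold CV score: sum of held-out log predictive densities.

    `fit` maps training data -> approximate posterior parameters (a
    stand-in for a VB fit); `predict_logpdf` maps (posterior, held-out
    data) -> log predictive density of the held-out fold.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(data))
    folds = np.array_split(idx, k)
    total = 0.0
    for i in range(k):
        test = data[folds[i]]
        train = data[np.concatenate([folds[j] for j in range(k) if j != i])]
        total += predict_logpdf(fit(train), test)
    return total

# Toy setting: known observation noise SIGMA, N(0, TAU^2) prior on the mean.
SIGMA, TAU = 1.0, 10.0

def fit_gaussian_mean(train):
    # Conjugate update; here the approximate posterior N(mu_n, s_n^2) is exact.
    n = len(train)
    s2 = 1.0 / (n / SIGMA**2 + 1.0 / TAU**2)
    mu = s2 * train.sum() / SIGMA**2
    return mu, s2

def fit_null(train):
    # Competing "bad" model: mean fixed at zero, no posterior uncertainty.
    return 0.0, 0.0

def logpred_gaussian(q, test):
    mu, s2 = q
    # Posterior predictive N(mu, s2 + SIGMA^2), evaluated at held-out points.
    return norm.logpdf(test, loc=mu, scale=np.sqrt(s2 + SIGMA**2)).sum()

rng = np.random.default_rng(1)
y = rng.normal(3.0, SIGMA, size=100)  # data generated with true mean 3

score_full = cvvb_score(y, fit_gaussian_mean, logpred_gaussian)
score_null = cvvb_score(y, fit_null, logpred_gaussian)
# Model screening: the misspecified mean-zero model gets a much lower score.
```

In the article's setting, `fit` would be the (expensive but much cheaper-than-MCMC) VB fit of a hierarchical evidence-accumulation model, and the screening step discards models whose cross-validated predictive score is clearly dominated before any marginal-likelihood computation is attempted.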