
Evaluating the complexity and falsifiability of psychological models.

Affiliations

Department of Cognitive Sciences, University of California Irvine.

Department of Psychology, University of Texas at Austin.

Publication information

Psychol Rev. 2023 Jul;130(4):853-872. doi: 10.1037/rev0000421. Epub 2023 Mar 9.

Abstract

Understanding model complexity is important for developing useful psychological models. One way to think about model complexity is in terms of the predictions a model makes and the ability of empirical evidence to falsify those predictions. We argue that existing measures of falsifiability have important limitations and develop a new measure. KL-delta uses Kullback-Leibler divergence to compare the prior predictive distributions of models to the data prior that formalizes knowledge about the plausibility of different experimental outcomes. Using introductory conceptual examples and applications with existing models and experiments, we show that KL-delta challenges widely held scientific intuitions about model complexity and falsifiability. In a psychophysics application, we show that hierarchical models with more parameters are often more falsifiable than the original nonhierarchical model. This counters the intuition that adding parameters always makes a model more complex. In a decision-making application, we show that a choice model incorporating response determinism can be harder to falsify than its special case of probability matching. This counters the intuition that if one model is a special case of another, the special case must be less complex. In a memory recall application, we show that using informative data priors based on the serial position curve allows KL-delta to distinguish models that otherwise would be indistinguishable. This shows the value in model evaluation of extending the notion of possible falsifiability, in which all data are considered equally likely, to the more general notion of plausible falsifiability, in which some data are more likely than others. (PsycInfo Database Record (c) 2023 APA, all rights reserved).
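As a toy illustration of the kind of comparison the abstract describes (not the paper's actual KL-delta implementation), the sketch below estimates a binomial model's prior predictive distribution by Monte Carlo and measures its Kullback-Leibler divergence from a hypothetical data prior over the same outcomes. The outcome space, the uniform parameter prior, and the triangular data prior are all illustrative assumptions.

```python
import numpy as np
from math import comb

rng = np.random.default_rng(0)

# Hypothetical discrete outcome space: number of successes in n trials.
n = 10
outcomes = np.arange(n + 1)

# Prior predictive distribution of a binomial model with a uniform prior
# on the rate theta: average the binomial likelihood over prior draws.
theta_draws = rng.uniform(0.0, 1.0, size=100_000)
prior_predictive = np.array([
    np.mean(comb(n, k) * theta_draws**k * (1 - theta_draws)**(n - k))
    for k in outcomes
])

# An illustrative data prior encoding that mid-range outcomes are more
# plausible than extreme ones (an arbitrary triangular shape).
data_prior = np.minimum(outcomes + 1, n + 1 - outcomes).astype(float)
data_prior /= data_prior.sum()

# KL divergence D(data_prior || prior_predictive): how far the model's
# predictions diverge from what is considered plausible a priori.
kl = float(np.sum(data_prior * np.log(data_prior / prior_predictive)))
print(f"KL(data prior || prior predictive) = {kl:.3f}")
```

A model whose prior predictive closely matches the data prior yields a KL value near zero; in the spirit of the abstract, this dependence on the data prior is what lets "plausible falsifiability" separate models that an all-outcomes-equally-likely notion of falsifiability cannot.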

