Assessing the distinguishability of models and the informativeness of data.

Author information

Navarro Daniel J, Pitt Mark A, Myung In Jae

Affiliation

Department of Psychology, Ohio State University, 1827 Neil Avenue, Columbus, OH 43210, USA.

Publication information

Cogn Psychol. 2004 Aug;49(1):47-84. doi: 10.1016/j.cogpsych.2003.11.001.

Abstract

A difficulty in the development and testing of psychological models is that they are typically evaluated solely on their ability to fit experimental data, with little consideration given to their ability to fit other possible data patterns. By examining how well model A fits data generated by model B, and vice versa (a technique that we call landscaping), much safer inferences can be made about the meaning of a model's fit to data. We demonstrate the landscaping technique using four models of retention and 77 historical data sets, and show how the method can be used to: (1) evaluate the distinguishability of models, (2) evaluate the informativeness of data in distinguishing between models, and (3) suggest new ways to distinguish between models. The generality of the method is demonstrated in two other research areas (information integration and categorization), and its relationship to the important notion of model complexity is discussed.
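
The landscaping procedure described in the abstract can be sketched informally in code: simulate many data sets from each candidate generating model, fit both models to every data set, and compare the resulting goodness-of-fit values. The sketch below is only an illustration of that idea, assuming two hypothetical retention models (an exponential and a power-law forgetting curve), Gaussian noise, arbitrary parameter ranges, and sum of squared error as the fit measure; none of these specific choices are taken from the paper.

# A minimal landscaping sketch, assuming illustrative retention models
# (exponential vs. power-law) and Gaussian noise; not the paper's implementation.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
lags = np.array([1, 2, 4, 8, 16, 32], dtype=float)  # hypothetical retention intervals

def exp_model(t, a, b):
    # Exponential retention: proportion recalled after lag t.
    return a * np.exp(-b * t)

def pow_model(t, a, b):
    # Power-law retention: proportion recalled after lag t.
    return a * (t + 1.0) ** (-b)

def best_fit_sse(model, t, y):
    # Sum of squared error at the best-fitting parameters of `model`.
    popt, _ = curve_fit(model, t, y, p0=[0.8, 0.3], maxfev=10000)
    return np.sum((y - model(t, *popt)) ** 2)

def landscape(gen_model, fit_a, fit_b, n_sets=200, noise=0.03):
    # Generate data from gen_model at random parameter settings and record
    # how well each of the two candidate models fits every simulated data set.
    fits = []
    for _ in range(n_sets):
        a, b = rng.uniform(0.5, 1.0), rng.uniform(0.1, 0.6)
        y = gen_model(lags, a, b) + rng.normal(0.0, noise, size=lags.size)
        y = np.clip(y, 0.0, 1.0)
        fits.append((best_fit_sse(fit_a, lags, y), best_fit_sse(fit_b, lags, y)))
    return np.array(fits)

# Landscape 1: data generated by the exponential model, fit by both models.
land_exp = landscape(exp_model, exp_model, pow_model)
# Landscape 2: data generated by the power-law model, fit by both models.
land_pow = landscape(pow_model, exp_model, pow_model)

print("exp-generated data: mean SSE (exp fit, pow fit) =", land_exp.mean(axis=0))
print("pow-generated data: mean SSE (exp fit, pow fit) =", land_pow.mean(axis=0))

One would then examine the joint distribution of these fit values (the "landscape"): if data generated by one model are routinely fit as well or better by the other, the two models are hard to distinguish with data of that kind.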
