Forster MR
University of Wisconsin, Madison
J Math Psychol. 2000 Mar;44(1):205-231. doi: 10.1006/jmps.1999.1284.
What is model selection? What are the goals of model selection? What are the methods of model selection, and how do they work? Which methods perform better than others, and in what circumstances? These questions rest on a number of key concepts in a relatively underdeveloped field. The aim of this paper is to explain some background concepts, to highlight some of the results in this special issue, and to add some of my own. The standard methods of model selection include classical hypothesis testing, maximum likelihood, the Bayes method, minimum description length, cross-validation, and Akaike's information criterion. They all provide an implementation of Occam's razor, in which parsimony or simplicity is balanced against goodness of fit. These methods primarily take account of the sampling errors in parameter estimation, although their relative success at this task depends on the circumstances. However, the aims of model selection should also include a model's ability to generalize to predictions in a different domain. Errors of extrapolation, or generalization, are different from errors of parameter estimation. So it seems that simplicity and parsimony may be an additional factor in managing these errors, in which case the standard methods of model selection are incomplete implementations of Occam's razor. Copyright 2000 Academic Press.
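As a concrete illustration of the fit-versus-simplicity trade-off the abstract describes (not taken from the paper itself), one of the methods it lists, Akaike's information criterion, scores a model as AIC = 2k − 2 ln L̂, where k is the number of free parameters; for Gaussian errors this reduces, up to an additive constant, to n·ln(RSS/n) + 2k. The sketch below compares a constant model and a straight-line model on hypothetical hand-picked data; all names and data values are illustrative assumptions.

```python
import math

# Hypothetical toy data: y is roughly linear in x, with small noise.
xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [0.1, 1.2, 1.9, 3.2, 3.8, 5.1]

def fit_constant(x, y):
    """Residual sum of squares for the constant model y = c, with c the mean."""
    c = sum(y) / len(y)
    rss = sum((yi - c) ** 2 for yi in y)
    return rss, 1  # one free parameter

def fit_linear(x, y):
    """RSS for the line y = a + b*x via closed-form least squares."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    rss = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    return rss, 2  # two free parameters

def aic(rss, k, n):
    """Gaussian-error AIC up to an additive constant: n*ln(RSS/n) + 2k."""
    return n * math.log(rss / n) + 2 * k

n = len(xs)
scores = {}
for name, fit in [("constant", fit_constant), ("linear", fit_linear)]:
    rss, k = fit(xs, ys)
    scores[name] = aic(rss, k, n)

# The extra parameter of the linear model costs +2, but its far smaller RSS
# more than pays for it, so AIC selects the linear model here.
best = min(scores, key=scores.get)
```

As the abstract notes, a criterion like this addresses sampling error in parameter estimation; it says nothing by itself about how well the selected model extrapolates to a different domain.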