Multilingual Speech Technologies Group, North-West University, Vanderbijlpark, South Africa.
Neural Comput. 2011 Jul;23(7):1899-909. doi: 10.1162/NECO_a_00137. Epub 2011 Apr 14.
We discuss the no-free-lunch (NFL) theorem for supervised learning as a logical paradox, that is, as a counterintuitive result that is correctly proven from apparently incontestable assumptions. We show that the uniform prior used in the proof of the theorem has a number of unpalatable consequences besides the NFL theorem itself, and we propose a simple definition of determination (by a learning set of given size) that casts further suspicion on the utility of this assumed prior. Whereas others have argued that the assumptions of the NFL theorem are not practically realistic, we show these assumptions to be at odds with supervised learning in principle. This analysis suggests a route toward the establishment of a more realistic prior probability for use in the extended Bayesian framework.
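The core of the uniform-prior assumption criticized in the abstract can be illustrated by direct enumeration. The following sketch (a toy construction, not drawn from the paper itself; the 3-point domain and the particular training labelling are illustrative assumptions) shows that if all Boolean target functions on a finite domain are taken to be equally likely a priori, then any learner's expected accuracy on an off-training-set point is exactly 1/2, no matter what it predicts:

```python
from itertools import product

# Toy illustration of the NFL setting: enumerate all Boolean target
# functions on a 3-point domain. Under a uniform prior over these
# functions, any prediction at an unseen point is right half the time.
domain = [0, 1, 2]
train_points = [0, 1]          # inputs whose labels the learner observes
test_point = 2                 # the off-training-set input

# All 2^3 = 8 possible target functions, each a tuple of outputs.
functions = list(product([0, 1], repeat=len(domain)))

# Condition on one particular (assumed) training labelling: f(0)=0, f(1)=1.
consistent = [f for f in functions if f[0] == 0 and f[1] == 1]

# Whichever label the learner guesses at the test point, exactly half
# of the targets consistent with the training data agree with it.
for guess in (0, 1):
    acc = sum(f[test_point] == guess for f in consistent) / len(consistent)
    print(f"guess={guess}: off-training-set accuracy = {acc}")
```

Both guesses score 0.5, so under this prior the training data determine nothing about unseen points, which is the counterintuitive consequence the abstract treats as a paradox.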