Dipartimento di Informatica e Scienze dell'Informazione, Università di Genova, 16146 Genoa, Italy.
Neural Comput. 2008 Jul;20(7):1873-97. doi: 10.1162/neco.2008.05-07-517.
We discuss how a large class of regularization methods, collectively known as spectral regularization and originally designed for solving ill-posed inverse problems, gives rise to regularized learning algorithms. All of these algorithms are consistent kernel methods that can be easily implemented. The intuition behind their derivation is that the same principle that allows for the numerical stabilization of a matrix inversion problem is crucial to avoiding overfitting. The various methods share a common derivation but have different computational and theoretical properties. We describe examples of such algorithms, analyze their classification performance on several data sets, and discuss their applicability to real-world problems.
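To make the connection between matrix-inversion stabilization and learning concrete, the following sketch illustrates the general spectral-filtering recipe on a kernel matrix: instead of computing K⁻¹y directly, one eigendecomposes K and applies a filter g(σ) to the eigenvalues. The Tikhonov filter g(σ) = 1/(σ + λ) and the spectral cut-off (truncated SVD) filter shown here are two standard members of this family; the kernel choice, bandwidth, regularization parameters, and toy data are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def gaussian_kernel(X1, X2, gamma=1.0):
    # Pairwise Gaussian (RBF) kernel matrix (gamma is an assumed bandwidth).
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def spectral_fit(K, y, filt):
    # Eigendecompose the symmetric PSD kernel matrix and replace 1/sigma
    # by a spectral filter g(sigma): a stabilized surrogate for K^{-1} y.
    sigmas, U = np.linalg.eigh(K)
    g = filt(np.clip(sigmas, 0.0, None))
    # Expansion coefficients c = U g(Sigma) U^T y.
    return U @ (g * (U.T @ y))

# Two example filters from the spectral-regularization family:
tikhonov = lambda lam: (lambda s: 1.0 / (s + lam))   # Tikhonov / kernel ridge
cutoff = lambda thr: (
    lambda s: np.where(s > thr, 1.0 / np.maximum(s, thr), 0.0)
)  # spectral cut-off (truncated eigendecomposition)

# Toy 1-D regression problem (hypothetical data, for illustration only).
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(40, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(40)

K = gaussian_kernel(X, X, gamma=5.0)
c = spectral_fit(K, y, tikhonov(1e-2))
y_hat = K @ c  # in-sample predictions f(x_i) = sum_j c_j k(x_i, x_j)
```

Swapping `tikhonov(1e-2)` for `cutoff(1e-3)` changes only the filter, not the derivation, which is exactly the sense in which the methods in this family share a common construction while differing in computational behavior.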