Department of Computer Science and Engineering, University of California, San Diego, La Jolla, CA 92093-0404, USA.
Neural Comput. 2010 Oct;22(10):2678-97. doi: 10.1162/NECO_a_00018.
We introduce a new family of positive-definite kernels for large-margin classification in support vector machines (SVMs). These kernels mimic the computation in large neural networks with one layer of hidden units. We also show how to derive new kernels, by recursive composition, that may be viewed as mapping their inputs through a series of successive nonlinear feature spaces. These recursively derived kernels mimic the computation in deep networks with multiple hidden layers. We evaluate SVMs with these kernels on problems designed to illustrate the advantages of deep architectures. On some of these problems, SVMs with the new kernels achieve state-of-the-art results, outperforming not only other SVMs but also deep belief nets.
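The abstract does not give the kernels' closed form; the sketch below is a minimal NumPy illustration based on the degree-n arc-cosine kernels described in the full paper, where J_n(theta) is the kernel's angular part and deeper kernels are obtained by replacing the inner products in the base kernel with the previous layer's kernel values. The function names (`arccos_kernel`, `deep_arccos_kernel`) are ours, introduced here for illustration, not the paper's.

```python
import numpy as np

def _angular(theta, n):
    """Angular part J_n(theta) of the arc-cosine kernel for n = 0, 1, 2
    (closed forms as given in the full paper)."""
    if n == 0:
        return np.pi - theta
    if n == 1:
        return np.sin(theta) + (np.pi - theta) * np.cos(theta)
    if n == 2:
        return (3.0 * np.sin(theta) * np.cos(theta)
                + (np.pi - theta) * (1.0 + 2.0 * np.cos(theta) ** 2))
    raise ValueError("only degrees 0, 1, 2 are implemented in this sketch")

def arccos_kernel(x, y, n=1):
    """One-layer arc-cosine kernel, mimicking a single hidden layer:
    k_n(x, y) = (1/pi) * ||x||^n * ||y||^n * J_n(theta)."""
    nx, ny = np.linalg.norm(x), np.linalg.norm(y)
    cos_t = np.clip(np.dot(x, y) / (nx * ny), -1.0, 1.0)
    return (nx * ny) ** n * _angular(np.arccos(cos_t), n) / np.pi

def deep_arccos_kernel(x, y, depth=3, n=1):
    """Recursive composition, mimicking a deep network: at each new layer,
    the inner products in the base kernel are replaced by the previous
    layer's kernel values."""
    kxy = arccos_kernel(x, y, n)
    kxx = arccos_kernel(x, x, n)
    kyy = arccos_kernel(y, y, n)
    for _ in range(depth - 1):
        cos_t = np.clip(kxy / np.sqrt(kxx * kyy), -1.0, 1.0)
        kxy = (kxx * kyy) ** (n / 2.0) * _angular(np.arccos(cos_t), n) / np.pi
        kxx = kxx ** n * _angular(0.0, n) / np.pi  # theta = 0 for (x, x)
        kyy = kyy ** n * _angular(0.0, n) / np.pi
    return kxy

# Example: Gram matrix for a tiny random dataset.
X = np.random.default_rng(0).normal(size=(5, 10))
K = np.array([[deep_arccos_kernel(a, b, depth=3, n=1) for b in X] for a in X])
```

A Gram matrix built this way can be passed to any standard SVM solver that accepts precomputed kernels, e.g. scikit-learn's `SVC(kernel='precomputed')`.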