Department of Data Science and Knowledge Engineering, Maastricht University, The Netherlands.
Neural Netw. 2019 Aug;116:46-55. doi: 10.1016/j.neunet.2019.03.011. Epub 2019 Mar 28.
This paper introduces novel deep architectures that use the hybrid neural-kernel core model as their first building block. The proposed models combine a neural-network-based architecture with a kernel-based model enriched with pooling layers. In particular, three kernel blocks with average, maxout and convolutional pooling layers are introduced and examined. We start with a simple merging layer that averages the outputs of the previous representation layers. The maxout layer, on the other hand, triggers competition among different representations of the input. Thanks to this pooling layer, not only is the dimensionality of the multi-scale representations reduced, but multiple sub-networks are also formed within the same model. In the same context, a pointwise convolutional layer is employed with the aim of projecting the multi-scale representations onto a new space. Experimental results show an improvement over the core deep hybrid model as well as over kernel-based models on several real-life datasets.
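The three pooling mechanisms named in the abstract can be sketched in NumPy. This is an illustrative toy, not the paper's implementation: the number of representations, their dimension, and the random projection weights are all assumptions made here, standing in for the learned multi-scale representations and learned pointwise-convolution weights of the actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Three hypothetical multi-scale representations of a single input,
# each of dimension d (counts and shapes are illustrative assumptions).
d = 8
reps = [rng.standard_normal(d) for _ in range(3)]
stacked = np.stack(reps)            # shape (3, d)

# Average merging layer: elementwise mean over the representations.
avg_out = stacked.mean(axis=0)      # shape (d,)

# Maxout layer: elementwise maximum, so the representations compete
# and only the strongest response at each coordinate survives.
maxout_out = stacked.max(axis=0)    # shape (d,)

# Pointwise (1x1) convolution: a linear mix of the three representations
# at each coordinate; the weights here are random stand-ins for the
# learned parameters that project onto a new space.
w = rng.standard_normal((1, 3))     # one output channel from 3 inputs
pointwise_out = (w @ stacked).ravel()  # shape (d,)
```

All three layers map the stack of multi-scale representations back to a single vector of the original dimension, which is how they reduce the output dimensionality of the merged block.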