IEEE Trans Neural Netw Learn Syst. 2016 Dec;27(12):2486-2498. doi: 10.1109/TNNLS.2015.2479223. Epub 2015 Oct 28.
We demonstrate a new deep learning autoencoder network, trained by a nonnegativity constraint algorithm (nonnegativity-constrained autoencoder), that learns features yielding a part-based representation of the data. The learning algorithm is based on penalizing negative weights. The performance of the algorithm is assessed by how well it decomposes data into parts, and its prediction performance is tested on three standard image data sets and one text data set. The results indicate that the nonnegativity constraint forces the autoencoder to learn features that amount to a part-based representation of the data, while improving sparsity and reconstruction quality in comparison with the traditional sparse autoencoder and nonnegative matrix factorization. It is also shown that this newly acquired representation improves the prediction performance of a deep neural network.
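The abstract describes the constraint only at a high level. As a rough illustration, the sketch below (not the authors' code) shows one common way such a constraint can be imposed: a quadratic penalty applied only to negative weight entries, added to the usual reconstruction-plus-sparsity objective of a sparse autoencoder. The layer sizes and the coefficients alpha, beta, and rho are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of a nonnegativity-constrained sparse autoencoder.
# Assumption: the constraint is enforced by penalizing negative weights
# with a quadratic term; hyperparameters here are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NonnegAutoencoder(nn.Module):
    def __init__(self, n_in=784, n_hidden=196):
        super().__init__()
        self.encoder = nn.Linear(n_in, n_hidden)
        self.decoder = nn.Linear(n_hidden, n_in)

    def forward(self, x):
        h = torch.sigmoid(self.encoder(x))      # hidden (feature) activations
        x_hat = torch.sigmoid(self.decoder(h))  # reconstruction
        return x_hat, h

def nonneg_penalty(model):
    # Quadratic cost on negative weight entries only;
    # nonnegative weights incur zero cost.
    cost = 0.0
    for w in (model.encoder.weight, model.decoder.weight):
        cost = cost + torch.sum(torch.clamp(w, max=0.0) ** 2)
    return cost

def kl_sparsity(h, rho=0.05, eps=1e-8):
    # KL-divergence sparsity term pushing mean hidden activations toward rho.
    rho_hat = h.mean(dim=0).clamp(eps, 1 - eps)
    return torch.sum(rho * torch.log(rho / rho_hat)
                     + (1 - rho) * torch.log((1 - rho) / (1 - rho_hat)))

def loss_fn(model, x, alpha=0.003, beta=3.0):
    x_hat, h = model(x)
    recon = F.mse_loss(x_hat, x)
    return recon + alpha * nonneg_penalty(model) + beta * kl_sparsity(h)

if __name__ == "__main__":
    model = NonnegAutoencoder()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x = torch.rand(64, 784)  # stand-in batch of flattened image vectors
    for _ in range(10):
        opt.zero_grad()
        loss = loss_fn(model, x)
        loss.backward()
        opt.step()
    # Training drives the weights toward nonnegative values, so each hidden
    # unit tends to capture an additive "part" of the input, analogous to
    # the part-based decomposition produced by nonnegative matrix factorization.
```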