IEEE Trans Neural Netw Learn Syst. 2018 Jun;29(6):2450-2463. doi: 10.1109/TNNLS.2017.2695223. Epub 2017 May 5.
This paper studies the connecting structure of deep neural networks and proposes a layerwise structure-learning method based on multiobjective optimization. Reducing the number of connection parameters in a deep network yields a model with better generalization; the aim is to find, for each layer, an optimal structure with high representation ability and good generalization. The visible data are then modeled with respect to the structure using a product of experts (PoE). To mitigate the difficulty of estimating the denominator in the PoE, the denominator is simplified and treated as a second objective: connection sparsity. Because representation ability and connection sparsity are inherently conflicting, the problem is cast as a multiobjective model, which is solved with an improved multiobjective evolutionary algorithm. Two tricks, designed around properties of the input data, reduce the computational cost. Experiments at the single-layer, hierarchical, and application levels demonstrate the effectiveness of the proposed algorithm, and the learned structures improve the performance of deep neural networks.
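The abstract frames structure learning as a trade-off between two conflicting objectives: representation ability and connection sparsity. A minimal sketch of that bi-objective view is below. This is not the paper's algorithm; the candidate masks, the per-connection "usefulness" weights, and the toy error function are all hypothetical stand-ins, used only to show how a Pareto front of layer structures emerges when error and density pull in opposite directions.

```python
from itertools import product

# Hypothetical per-connection usefulness (assumption, not from the paper).
WEIGHTS = (0.5, 0.3, 0.2)

def evaluate(mask):
    """Return (error, density) for a binary connection mask; both minimized."""
    # Toy reconstruction error: usefulness lost by the dropped connections.
    error = sum(w for w, m in zip(WEIGHTS, mask) if m == 0)
    # Connection density: the sparsity objective (fewer connections is better).
    density = sum(mask) / len(mask)
    return (error, density)

def dominates(a, b):
    """Pareto dominance: a is no worse in every objective and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    """Keep every candidate whose objective vector no other vector dominates."""
    scored = [(c, evaluate(c)) for c in candidates]
    return [(c, f) for c, f in scored
            if not any(dominates(g, f) for _, g in scored)]

# Enumerate all connection masks for a tiny 3-connection layer.
masks = list(product((0, 1), repeat=3))
front = pareto_front(masks)
# The front traces the error-vs-sparsity trade-off the abstract describes;
# an evolutionary algorithm would search this front for realistic layer sizes.
```

In the paper's setting, exhaustive enumeration is infeasible for real layers, which is why a multiobjective evolutionary algorithm is used to approximate the front instead.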