Zhang Zeyang, Chen Yidong, Zhou Changle
IEEE Trans Neural Netw Learn Syst. 2022 May 27;PP. doi: 10.1109/TNNLS.2022.3176027.
For a deep learning model, the network architecture is crucial, as a model with an inappropriate architecture often suffers from performance degradation or parameter redundancy. However, finding an appropriate architecture for a given application is largely empirical and difficult. To tackle this problem, we propose a novel deep learning model with a dynamic architecture, named the self-growing binary activation network (SGBAN), which extends the design of a fully connected network (FCN) progressively, resulting in a more compact architecture with higher performance on a given task. This construction process is more efficient than neural architecture search methods, which train a large number of networks to search for the optimal one. Concretely, the training technique of SGBAN is based on function-preserving transformations that expand the architecture and incorporate the information in new data without discarding the knowledge learned in previous steps. Experimental results on four different classification tasks, i.e., Iris, MNIST, CIFAR-10, and CIFAR-100, demonstrate the effectiveness of SGBAN. On the one hand, SGBAN achieves accuracy competitive with an FCN of the same architecture, which indicates that the new training technique has optimization ability equivalent to traditional optimization methods. On the other hand, on MNIST, the architecture generated by SGBAN achieves a 0.59% improvement in accuracy with only 33.44% of the parameters when compared with FCNs built from manually designed architectures, i.e., 500 + 150 hidden units. Furthermore, we demonstrate that replacing the fully connected layers of a well-trained VGG-19 with SGBAN yields slightly improved performance with less than 1% of the parameters on all these tasks. Finally, we show that the proposed method can handle incremental learning tasks and outperforms three prominent incremental learning methods, i.e., learning without forgetting, elastic weight consolidation, and gradient episodic memory, on the incremental learning tasks on both Disjoint MNIST and Disjoint CIFAR-10.
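To illustrate what a function-preserving transformation means in this context, the sketch below shows a Net2Net-style widening of one fully connected hidden layer in NumPy: chosen hidden units are duplicated and their outgoing weights are split among the copies so the network output is unchanged before further training. This is only a generic illustration under assumed names (widen_fc_layer, W1, b1, W2); the specific transformations SGBAN applies to binary activation networks are defined in the paper itself.

    import numpy as np

    def widen_fc_layer(W1, b1, W2, new_units):
        # W1: (in_dim, hidden) incoming weights, b1: (hidden,) bias,
        # W2: (hidden, out_dim) outgoing weights of the hidden layer.
        # Returns widened (W1', b1', W2') with the network function preserved.
        hidden = W1.shape[1]
        idx = np.random.randint(0, hidden, size=new_units)   # units to duplicate
        # Copy the incoming weights and biases of the chosen units,
        # so the new units produce identical pre-activations (and activations).
        W1_new = np.concatenate([W1, W1[:, idx]], axis=1)
        b1_new = np.concatenate([b1, b1[idx]])
        # Split outgoing weights evenly across each unit and its copies,
        # so the summed contribution to the next layer stays the same.
        counts = np.ones(hidden)
        for i in idx:
            counts[i] += 1
        W2_new = np.concatenate([W2 / counts[:, None],
                                 W2[idx, :] / counts[idx, None]], axis=0)
        return W1_new, b1_new, W2_new

    # Usage: widen a 4-unit hidden layer by 2 units; outputs stay identical.
    rng = np.random.default_rng(0)
    x = rng.normal(size=(5, 3))
    W1, b1, W2 = rng.normal(size=(3, 4)), rng.normal(size=4), rng.normal(size=(4, 2))
    act = lambda z: (z > 0).astype(z.dtype)    # binary (step) activation
    y_before = act(x @ W1 + b1) @ W2
    W1w, b1w, W2w = widen_fc_layer(W1, b1, W2, new_units=2)
    y_after = act(x @ W1w + b1w) @ W2w
    assert np.allclose(y_before, y_after)

Because the copied units share incoming weights, any pointwise activation (including the binary step used here) yields identical outputs for the original and its copies, so dividing their outgoing weights by the copy count keeps the overall function intact while giving the wider layer more capacity to train on new data.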