Li Hailiang, Weng Jian, Mao Yijun, Wang Yonghua, Zhan Yiju, Cai Qingling, Gu Wanrong
IEEE Trans Neural Netw Learn Syst. 2021 Sep;32(9):4267-4276. doi: 10.1109/TNNLS.2021.3070895. Epub 2021 Aug 31.
Dropout is one of the most widely used methods for preventing overfitting in neural networks. However, it deactivates neurons rigidly and at random according to a fixed probability, which is inconsistent with the activation mode of neurons in the human cerebral cortex. Inspired by gene theory and the activation mechanism of brain neurons, we propose a more intelligent adaptive dropout, in which a variational autoencoder (VAE) is overlaid on an existing neural network to regularize its hidden neurons by adaptively setting their activations to zero. Through alternating iterative training, the drop probability of each hidden neuron can be learned from the weights, effectively avoiding the shortcomings of standard dropout. Experimental results on multiple data sets show that this method suppresses overfitting in various neural networks better than standard dropout does. Additionally, this adaptive dropout technique can reduce the number of neurons and improve training efficiency.
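To make the core idea concrete, below is a minimal PyTorch sketch of dropout with learnable, per-neuron drop probabilities. It is not the paper's method: the paper derives the probabilities from a VAE overlaid on the network and trained by alternating iteration, whereas this sketch parameterizes them directly and trains them with a straight-through estimator. All names here (AdaptiveDropout, logit_p) are illustrative, not from the paper.

```python
import torch
import torch.nn as nn


class AdaptiveDropout(nn.Module):
    """Dropout with a learnable drop probability per hidden neuron.

    Sketch only: the paper instead learns these probabilities via an
    overlaid VAE with alternating iterative training.
    """

    def __init__(self, num_features: int, init_drop: float = 0.5):
        super().__init__()
        init_logit = float(torch.logit(torch.tensor(init_drop)))
        # Learnable logit of the drop probability for each neuron.
        self.logit_p = nn.Parameter(torch.full((num_features,), init_logit))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        keep_p = 1.0 - torch.sigmoid(self.logit_p)  # per-neuron keep probability
        if not self.training:
            return x  # inverted dropout: no scaling needed at test time
        mask = torch.bernoulli(keep_p.expand_as(x))
        # Straight-through trick: the forward pass uses the hard 0/1 mask,
        # while gradients flow through keep_p so the probabilities are learned.
        mask = mask + keep_p - keep_p.detach()
        return x * mask / keep_p.clamp_min(1e-6)


if __name__ == "__main__":
    # Example: a hidden layer regularized by adaptive dropout.
    layer = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), AdaptiveDropout(256))
    h = layer(torch.randn(32, 784))
    print(h.shape)  # torch.Size([32, 256])
```

One caveat on this simplified variant: driven only by the task loss, the keep probabilities tend to drift toward 1 (no dropout), so a sparsity or KL-style penalty on torch.sigmoid(logit_p) is typically added. A useful side effect, consistent with the abstract's claim about reducing neuron counts, is that neurons whose learned drop probability approaches 1 can simply be pruned.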