College of Biomedical Engineering and Instrument Science, Yuquan Campus, Zhejiang University, 38 Zheda Road, Hangzhou 310027, China.
Department of Biomedical Engineering, The Chinese University of Hong Kong, Shatin, New Territories, Hong Kong, China.
Sensors (Basel). 2021 Sep 25;21(19):6410. doi: 10.3390/s21196410.
Deep learning models, especially recurrent neural networks (RNNs), have recently been successfully applied to automatic modulation classification (AMC) problems. However, deep neural networks are usually overparameterized, i.e., most of the connections between neurons are redundant. The large model size hinders the deployment of deep neural networks in applications such as Internet-of-Things (IoT) networks. Therefore, reducing parameters without compromising network performance via sparse learning is often desirable, since it can alleviate the computational and storage burdens of deep learning models. In this paper, we propose a sparse learning algorithm that can directly train a sparsely connected neural network based on the statistics of weight magnitude and gradient momentum. We first used the MNIST and CIFAR10 datasets to demonstrate the effectiveness of this method. Subsequently, we applied it to RNNs for AMC problems, with different pruning strategies for the recurrent and non-recurrent connections. Experimental results demonstrated that the proposed method can effectively reduce the parameters of the neural networks while maintaining model performance. Moreover, we show that appropriate sparsity can further improve the generalization ability of the network.
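The abstract describes pruning connections based on statistics of weight magnitude and gradient momentum. A minimal NumPy sketch of one plausible such criterion is shown below: connections are scored by a weighted sum of weight magnitude and momentum magnitude, and only the top-scoring fraction is kept. The scoring rule, the `lam` parameter, and the function name are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def prune_mask(weights, momentum, sparsity, lam=1.0):
    """Return a boolean mask keeping the top (1 - sparsity) fraction of
    connections, ranked by |weight| + lam * |momentum|.

    NOTE: this combined score is an illustrative assumption; the paper's
    actual criterion may differ.
    """
    score = np.abs(weights) + lam * np.abs(momentum)
    k = int(round(score.size * (1.0 - sparsity)))  # connections to keep
    if k <= 0:
        return np.zeros_like(weights, dtype=bool)
    # k-th largest score becomes the keep threshold
    threshold = np.partition(score.ravel(), -k)[-k]
    return score >= threshold

# Toy example: 6 weights, prune half of them.
w = np.array([0.5, -0.01, 0.3, 0.02, -0.8, 0.05])
m = np.array([0.0,  0.9,  0.1, 0.0,  0.1,  0.0])
mask = prune_mask(w, m, sparsity=0.5)
```

In this toy example, the weight at index 1 has a tiny magnitude but a large gradient momentum, so it survives pruning; a purely magnitude-based criterion would have removed it. Using momentum as a second signal is one way to avoid prematurely pruning weights that are still being actively updated.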