Tang Xiaolong, Ye Shuo, Shi Yufeng, Hu Tianheng, Peng Qinmu, You Xinge
IEEE Trans Neural Netw Learn Syst. 2025 May;36(5):8401-8413. doi: 10.1109/TNNLS.2024.3415068. Epub 2025 May 2.
Filter pruning has been widely adopted to compress and accelerate convolutional neural networks (CNNs). However, existing approaches remain far from practical application due to biased filter selection and heavy computation costs. This article introduces a new filter pruning method that selects filters in an interpretable, multiperspective, and lightweight manner. Specifically, we evaluate the contributions of filters from both individual and overall perspectives. To quantify the amount of information contained in each filter, a new metric called information capacity is proposed. Inspired by information theory, we utilize the interpretable entropy to measure information capacity and develop a feature-guided approximation process. To capture correlations among filters, another metric called information independence is designed. Since both metrics are evaluated in a simple but effective way, we can identify and prune the least important filters at a low computation cost. We conduct comprehensive experiments on benchmark datasets with various widely used CNN architectures to evaluate the performance of our method. For instance, on ILSVRC-2012, our method outperforms state-of-the-art methods, reducing floating-point operations (FLOPs) by 77.4% and parameters by 69.3% for ResNet-50 with only a minor accuracy drop of 2.64%.
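The abstract does not specify the exact computation of the two metrics, so the following PyTorch sketch is only one plausible realization, assuming information capacity is the Shannon entropy of a filter's activation histogram and information independence is one minus a filter's largest absolute correlation with any other filter; the function names, the histogram bin count, and the trade-off weight `alpha` are all hypothetical, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def information_capacity(feature_maps, num_bins=32):
    """Per-filter entropy of activation values: a hypothetical stand-in for
    the paper's 'information capacity' (the actual feature-guided
    approximation is not described in the abstract).

    feature_maps: tensor of shape (N, C, H, W) from one conv layer.
    Returns a tensor of shape (C,) with one entropy score per filter.
    """
    n, c, h, w = feature_maps.shape
    acts = feature_maps.permute(1, 0, 2, 3).reshape(c, -1)  # (C, N*H*W)
    scores = torch.empty(c)
    for i in range(c):
        hist = torch.histc(acts[i], bins=num_bins)
        p = hist / hist.sum()
        p = p[p > 0]                       # drop empty bins
        scores[i] = -(p * p.log()).sum()   # Shannon entropy (nats)
    return scores

def information_independence(feature_maps):
    """Per-filter independence: 1 minus the largest absolute Pearson
    correlation with any other filter's flattened feature map. This is one
    plausible reading of 'correlations among filters'."""
    n, c, h, w = feature_maps.shape
    acts = feature_maps.permute(1, 0, 2, 3).reshape(c, -1)
    acts = acts - acts.mean(dim=1, keepdim=True)  # center each row
    acts = F.normalize(acts, dim=1)               # unit-norm rows
    corr = (acts @ acts.t()).abs()                # (C, C) |correlation|
    corr.fill_diagonal_(0.0)                      # ignore self-correlation
    return 1.0 - corr.max(dim=1).values

def select_filters_to_prune(feature_maps, prune_ratio=0.5, alpha=0.5):
    """Rank filters by a weighted sum of the two metrics and return the
    indices of the lowest-scoring ones. alpha is an assumed trade-off
    weight; the paper's actual combination rule may differ."""
    cap = information_capacity(feature_maps)
    ind = information_independence(feature_maps)
    # Normalize each metric to [0, 1] before combining.
    cap = (cap - cap.min()) / (cap.max() - cap.min() + 1e-8)
    ind = (ind - ind.min()) / (ind.max() - ind.min() + 1e-8)
    score = alpha * cap + (1 - alpha) * ind
    num_prune = int(prune_ratio * score.numel())
    return torch.argsort(score)[:num_prune]
```

In practice one would collect `feature_maps` from a hooked convolutional layer on a small calibration batch, prune the returned filter indices, and fine-tune; because the scores come from simple histogram and correlation statistics rather than retraining-based importance estimates, the selection step stays lightweight, consistent with the abstract's claim of low computation cost.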