College of Information Science and Engineering, Xinjiang University, Urumqi 830046, China.
Comput Intell Neurosci. 2022 May 9;2022:9637460. doi: 10.1155/2022/9637460. eCollection 2022.
Some current algorithms lose important features because of coarse feature distillation and lose key channel information because of the compression step in channel attention. To address these problems, we propose a progressive multistage distillation network that refines features gradually, in stages, so as to retain as much key feature information as possible. In addition, to maximize network performance, we propose a weight-sharing, information-lossless attention block that enhances channel features through a weight-sharing auxiliary path while using convolution layers to model interchannel dependencies without compression, effectively avoiding the information loss of previous channel-attention designs. Extensive experiments on several benchmark data sets show that our algorithm achieves a good balance among network performance, parameter count, and computational complexity, and delivers highly competitive results in both objective metrics and subjective visual quality, demonstrating its advantages for image reconstruction. These results indicate that gradual, coarse-to-fine feature distillation is effective in improving network performance. Our code is available at the following link: https://github.com/Cai631/PMDN.
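The abstract's key architectural claim is that interchannel dependencies can be modeled with a convolution instead of the squeeze-and-reduce bottleneck used by compressed channel attention. The paper's actual block is not reproduced here; below is a minimal NumPy sketch of the compression-free idea, assuming (our assumption, not the paper's specification) a single 1D convolution applied across the channel descriptor, so no channel is projected into a lower-dimensional space and then back.

```python
import numpy as np

def channel_attention_no_compression(feat, kernel):
    """Sketch of compression-free channel attention.

    feat:   feature map of shape (C, H, W)
    kernel: 1D convolution weights of odd length, slid across channels

    Unlike reduction-based attention (C -> C/r -> C), every channel
    descriptor is kept at full dimensionality throughout.
    """
    c = feat.shape[0]
    # Global average pooling: one descriptor per channel -> (C,)
    desc = feat.mean(axis=(1, 2))
    k = len(kernel)
    pad = k // 2
    # Edge-pad so the 1D convolution preserves the channel count
    padded = np.pad(desc, pad, mode="edge")
    scores = np.array([np.dot(padded[i:i + k], kernel) for i in range(c)])
    # Sigmoid gate, then rescale each channel of the input
    weights = 1.0 / (1.0 + np.exp(-scores))
    return feat * weights[:, None, None]
```

With an identity-like kernel `[0, 1, 0]`, each channel is gated by the sigmoid of its own pooled mean, which makes the behavior easy to verify by hand; a learned kernel would instead mix information from neighboring channels without ever compressing them.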