Embarrassingly Simple Binarization for Deep Single Image Super-Resolution Networks

Authors

Zhang Lei, Lang Zhiqiang, Wei Wei, Zhang Yanning

Publication

IEEE Trans Image Process. 2021;30:3934-3945. doi: 10.1109/TIP.2021.3066906. Epub 2021 Mar 26.

Abstract

Deep convolutional neural networks (DCNNs) have shown pleasing performance in single image super-resolution (SISR). To deploy them on real devices with limited storage and computational resources, a promising solution is to binarize the network, i.e., quantize each floating-point weight and activation into 1 bit. However, existing works on binarizing DCNNs still suffer from severe performance degradation in SISR. To mitigate this problem, we argue that the degradation mainly comes from the lack of an appropriate constraint on the network weights, which makes it difficult to sensitively reverse the binarization results of these weights using the backpropagated gradient during training and thus limits the flexibility of the network in fitting extensive training samples. Inspired by this, we present an embarrassingly simple but effective binarization scheme for SISR, which clearly relieves the performance degeneration resulting from network binarization and is applicable to different DCNN architectures. Specifically, we force each weight to follow a compact uniform prior, under which each weight takes a very small absolute value close to zero, so that its binarization result can be straightforwardly reversed even by a small backpropagated gradient. By doing this, the flexibility and the generalization performance of the binarized network can be improved. Moreover, such a prior performs much better when real identity shortcuts are introduced into the network. In addition, to avoid falling into bad local minima during training, we employ a pixel-wise curriculum learning strategy to learn the constrained weights in an easy-to-hard manner. Experiments on four SISR benchmark datasets demonstrate the effectiveness of the proposed binarization method for different SISR network architectures; e.g., it even achieves performance comparable to a baseline with 5 quantization bits.
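The core argument of the abstract — that weights with small magnitudes let a small backpropagated gradient flip the sign, and hence the binarization result — can be illustrated with a minimal numerical sketch. This is not the authors' implementation: the `binarize`, `constrain_uniform`, and `bound` names are illustrative, and enforcing the compact uniform prior by clipping into `[-bound, bound]` is an assumed simplification.

```python
import numpy as np

def binarize(w):
    # Standard sign binarization: each float weight becomes +/-1,
    # scaled by the mean absolute magnitude of the layer.
    alpha = np.abs(w).mean()
    return alpha * np.sign(w)

def constrain_uniform(w, bound=0.01):
    # Assumed form of the compact uniform prior: keep every weight
    # inside [-bound, bound], so magnitudes stay close to zero and
    # a small gradient step suffices to flip the sign.
    return np.clip(w, -bound, bound)

grad_step = 0.05  # magnitude of a small backpropagated gradient update

# Unconstrained weight: the same small step cannot reverse its sign.
w_large = np.array([0.8])
flipped_large = np.sign(w_large - grad_step) != np.sign(w_large)

# Constrained weight: the step reverses the binarization result.
w_small = constrain_uniform(np.array([0.8]))  # clipped to 0.01
flipped_small = np.sign(w_small - grad_step) != np.sign(w_small)

print(bool(flipped_large[0]), bool(flipped_small[0]))  # False True
```

The sketch only shows why the constraint increases the "flippability" of binarized weights during training; the paper's full method additionally relies on real identity shortcuts and pixel-wise curriculum learning, which are omitted here.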

