Jiang Xinrui, Wang Nannan, Xin Jingwei, Li Keyu, Yang Xi, Li Jie, Gao Xinbo
IEEE Trans Neural Netw Learn Syst. 2024 Mar;35(3):3989-4001. doi: 10.1109/TNNLS.2022.3201528. Epub 2024 Feb 29.
Binary neural networks (BNNs) are an effective means of reducing a model's computational and memory costs, and they have achieved considerable progress in the super-resolution (SR) field. However, a noticeable performance gap remains between a binary SR network and its full-precision counterpart. Since the information density of quantized features is far lower than that of full-precision features, we aim to improve the precision of quantized features so as to produce sufficiently rich output activations for the SR task. First, we observe that a multibit value can be approximated by multiple 1-bit values, and that the computational power of binary convolution can be improved by approximating the multibit convolution process. Then, we propose a mixed binary representation set to approximate multibit activations, which effectively compensates for the quantization precision loss. Finally, we present a new precision-driven binary convolution (PDBC) module, which increases convolution precision and preserves image detail information without extra computation. Compared with standard binary convolution, our method greatly reduces the information loss caused by binarization. In experiments, our method consistently outperforms the baseline models and surpasses state-of-the-art methods in terms of peak signal-to-noise ratio (PSNR) and visual quality.
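As a rough illustration of the core observation, the sketch below (plain NumPy; the function names, the greedy residual-fitting scheme, and the choice of three bases are our assumptions for illustration, not the paper's exact PDBC formulation) approximates a full-precision activation as a sum of scaled sign tensors, so that a higher-precision convolution decomposes into a sum of binary convolutions:

```python
import numpy as np

def mixed_binary_approximation(x, num_bases=3):
    """Approximate x with sum_i alpha_i * b_i, where each b_i is in {-1, +1}.

    Each base greedily fits the residual left by the previous bases, so the
    precision of the reconstructed activation grows with num_bases.
    """
    residual = x.astype(np.float64)
    bases, scales = [], []
    for _ in range(num_bases):
        b = np.sign(residual)
        b[b == 0] = 1.0                      # keep sign(0) binary-valued
        alpha = np.abs(residual).mean()      # least-squares optimal scale for a sign base
        bases.append(b)
        scales.append(alpha)
        residual = residual - alpha * b      # the next base fits what is left
    return bases, scales

def approx_conv(x, w_bin, w_scale, num_bases=3):
    """Multibit-like convolution assembled from binary pieces (1-D, 'valid' mode).

    With binary weights (w_bin in {-1, +1}, scaled by w_scale), each term
    np.convolve(b_i, w_bin) needs only additions/subtractions in hardware;
    the floating-point work reduces to the scalar rescaling alpha_i * w_scale.
    """
    bases, scales = mixed_binary_approximation(x, num_bases)
    return sum(a * w_scale * np.convolve(b, w_bin, mode="valid")
               for a, b in zip(scales, bases))

# Tiny usage example: more binary bases -> smaller approximation error.
rng = np.random.default_rng(0)
x = rng.normal(size=64)
w = np.sign(rng.normal(size=3))              # a binary kernel
ref = np.convolve(x, w, mode="valid")        # full-precision reference
for k in (1, 2, 3):
    err = np.abs(approx_conv(x, w, 1.0, k) - ref).mean()
    print(f"bases={k}: mean abs error={err:.4f}")
```

Running this shows the reconstruction error shrinking as bases are added, which is the sense in which multiple 1-bit values can recover a multibit value while each convolution remains binary.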