Kim HyunJin
School of Electronics and Electrical Engineering, Dankook University, Yongin, South Korea.
PeerJ Comput Sci. 2021 Mar 26;7:e454. doi: 10.7717/peerj-cs.454. eCollection 2021.
This article proposes a novel network model, denoted AresB-Net, that achieves more accurate residual binarized convolutional neural networks (CNNs). Although residual CNNs enhance the classification accuracy of binarized neural networks by increasing feature resolution, the degraded classification accuracy relative to real-valued residual CNNs remains the primary concern. AresB-Net consists of novel basic blocks that amortize the severe error from binarization, yielding a well-balanced pyramid structure without downsampling convolutions. In each basic block, the shortcut is added to the convolution output and concatenated with it, and the expanded channels are then shuffled for the next grouped convolution. When the downsampling stride is greater than 1, our model adopts only a max-pooling layer to generate a low-cost shortcut. This structure facilitates feature reuse from previous layers, thereby alleviating the error from the binarized convolution and increasing classification accuracy with reduced computational cost and small weight storage requirements. Despite the low hardware cost of binarized computations, the proposed model achieves remarkable classification accuracy on the CIFAR and ImageNet datasets.
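The concatenate-then-shuffle step described above can be sketched as follows. This is a minimal illustration of the channel-expansion idea only, not the paper's implementation: it represents each channel by a label rather than a feature map, and the function name `expand_and_shuffle` and the group count are assumptions for illustration.

```python
def expand_and_shuffle(conv_out, shortcut, groups=2):
    """Concatenate shortcut channels with the convolution output,
    then shuffle the expanded channels so the next grouped convolution
    mixes information across groups (ShuffleNet-style channel shuffle).

    Channels are represented here as simple labels; in the real network
    each entry would be an HxW binarized feature map.
    """
    expanded = conv_out + shortcut          # channel-wise concatenation
    n = len(expanded)
    assert n % groups == 0, "channel count must divide evenly into groups"
    per_group = n // groups
    # Shuffle: view as (groups, per_group), transpose, flatten.
    return [expanded[g * per_group + i]
            for i in range(per_group)
            for g in range(groups)]

conv_out = ["c0", "c1", "c2", "c3"]   # channels from the binarized conv
shortcut = ["s0", "s1", "s2", "s3"]   # low-cost shortcut channels
print(expand_and_shuffle(conv_out, shortcut))
# -> ['c0', 's0', 'c1', 's1', 'c2', 's2', 'c3', 's3']
```

With two groups, the shuffle interleaves convolution-output and shortcut channels, so each group of the next grouped convolution sees both kinds of features; this is the mechanism by which the block reuses earlier features to offset binarization error.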