Wu Ruiqi, Zhou Feng, Li Nan, Liu Xian, Wang Rugang
School of Information Technology, Yancheng Institute of Technology, Yancheng, Jiangsu, China.
PeerJ Comput Sci. 2023 Aug 14;9:e1529. doi: 10.7717/peerj-cs.1529. eCollection 2023.
Handwritten Chinese character recognition (HCCR) is a difficult problem in character recognition: Chinese characters are numerous and many of them are visually very similar. HCCR models also consume a large amount of computational resources at runtime, making them difficult to deploy on resource-limited platforms.
To reduce the computational cost and improve the runtime efficiency of such models, an improved lightweight HCCR model is proposed in this article. We reconstructed the basic modules of the SqueezeNext network so that the model is compatible with the introduced attention module and with model compression techniques. The proposed Cross-stage Convolutional Block Attention Module (C-CBAM) redeploys the Spatial Attention Module (SAM) and the Channel Attention Module (CAM) according to the feature-map characteristics of the deep and shallow layers of the model, so as to enhance information exchange between those layers. The reformulated intra-stage criterion for assessing convolutional-kernel importance incorporates the normalization of the weights and enables structured pruning of each stage of the model in equal proportions. Quantization-aware training then maps the 32-bit floating-point weights of the pruned model to 8-bit fixed-point weights with only minor accuracy loss.
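The abstract does not give the exact formula for the intra-stage importance criterion, so the following is only a minimal sketch of the general idea: score each convolutional kernel (here a flat list of weights) by the norm of its weights, normalize the scores within a stage, and prune the lowest-scoring fraction of every stage in equal proportion. All names and the choice of L2 norm are illustrative assumptions.

```python
# Sketch of per-stage structured pruning by normalized kernel importance.
# The specific scoring rule (L2 norm + min-max normalization) is an
# assumption for illustration; the paper's exact criterion may differ.
import math

def kernel_importance(kernel):
    """L2 norm of one kernel's weights (given as a flat list of floats)."""
    return math.sqrt(sum(w * w for w in kernel))

def prune_stage(kernels, prune_rate):
    """Keep the top (1 - prune_rate) kernels of one stage by importance.

    Scores are min-max normalized within the stage so that stages with
    different weight magnitudes can be pruned in equal proportions.
    """
    scores = [kernel_importance(k) for k in kernels]
    lo, hi = min(scores), max(scores)
    span = (hi - lo) or 1.0
    norm = [(s - lo) / span for s in scores]
    keep = max(1, round(len(kernels) * (1.0 - prune_rate)))
    ranked = sorted(range(len(kernels)), key=lambda i: norm[i], reverse=True)
    kept = sorted(ranked[:keep])  # preserve original kernel order
    return [kernels[i] for i in kept]

# Toy stage of four 1x3 kernels pruned at 50%: the two kernels with the
# smallest weight norms are removed.
stage = [[0.9, 0.8, 0.7], [0.01, 0.02, 0.01], [0.5, 0.4, 0.6], [0.1, 0.1, 0.2]]
pruned = prune_stage(stage, prune_rate=0.5)
```

Applying the same `prune_rate` to every stage is what makes the pruning "equal proportion" across the model, in contrast to global ranking, which can strip some stages almost entirely.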
Pruning with the new convolutional-kernel importance criterion proposed in this article achieves a pruning rate of 50.79% with little impact on accuracy. Together, these optimizations compress the model to 1.06 MB while reaching 97.36% accuracy on the CASIA-HWDB dataset. Compared with the initial model, the model size is reduced by 87.15% and accuracy is improved by 1.71%. The proposed model thus greatly reduces running time and storage requirements while maintaining accuracy.
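The float32-to-int8 mapping that quantization-aware training learns to tolerate can be sketched as an affine quantization scheme. The abstract only states that 32-bit floats are mapped to 8-bit fixed-point values, so the asymmetric scale/zero-point formulation below is an assumption for illustration, not necessarily the paper's exact scheme.

```python
# Sketch of an affine float32 -> int8 mapping (an assumption; the paper
# may use a different quantization scheme).

def quantize(weights, num_bits=8):
    """Map a list of float weights to signed fixed-point integers.

    Returns the quantized values together with the scale and zero point
    needed to recover approximate floats.
    """
    qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
    w_min, w_max = min(weights), max(weights)
    scale = (w_max - w_min) / (qmax - qmin) or 1.0  # avoid zero scale
    zero_point = round(qmin - w_min / scale)
    q = [max(qmin, min(qmax, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float weights from the quantized values."""
    return [(qi - zero_point) * scale for qi in q]

weights = [-0.42, 0.0, 0.13, 0.5, -0.07]
q, scale, zp = quantize(weights)
recovered = dequantize(q, scale, zp)
```

The round-trip error per weight is bounded by half the quantization step (`scale / 2`), which is why an 8-bit mapping of a well-ranged weight tensor loses so little accuracy; quantization-aware training further lets the network adapt its weights to this rounding during fine-tuning.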