Sui Xuefu, Lv Qunbo, Ke Changjun, Li Mingshan, Zhuang Mingjin, Yu Haiyang, Tan Zheng
Aerospace Information Research Institute, Chinese Academy of Sciences, No. 9 Dengzhuang South Road, Haidian District, Beijing 100094, China.
Key Laboratory of Computational Optical Imaging Technology, Chinese Academy of Sciences, No. 9 Dengzhuang South Road, Haidian District, Beijing 100094, China.
Sensors (Basel). 2023 Dec 28;24(1):181. doi: 10.3390/s24010181.
In the field of edge computing, quantizing convolutional neural networks (CNNs) to extremely low bit widths can significantly alleviate the associated storage and computational burdens in embedded hardware, thereby improving computational efficiency. However, such quantization also brings a substantial decrease in detection accuracy. This paper proposes an innovative method, called Adaptive Global Power-of-Two Ternary Quantization Based on Unfixed Boundary Thresholds (APTQ). APTQ achieves adaptive quantization by quantizing each filter into two binary subfilters represented as power-of-two values, thereby addressing both the accuracy degradation caused by the limited expressive ability of low-bit-width weight values and the contradiction between fixed quantization boundaries and the uneven actual weight distribution. It effectively reduces the accuracy loss while exhibiting strong hardware-friendly characteristics owing to the power-of-two quantization. This paper further extends APTQ into the APQ quantization algorithm, which can adapt to arbitrary quantization bit widths. In addition, this paper designs dedicated edge-deployment convolutional computation modules for the resulting quantized models. Quantization comparison experiments with multiple commonly used CNN models on the CIFAR10, CIFAR100, and Mini-ImageNet data sets verify that the APTQ and APQ algorithms achieve better accuracy than most state-of-the-art quantization algorithms and incur very low accuracy loss on certain CNNs (e.g., the accuracy loss of the APTQ ternary ResNet-56 model on CIFAR10 is 0.13%). The dedicated convolutional computation modules enable the corresponding quantized models to occupy fewer on-chip hardware resources in edge chips, thereby effectively improving computational efficiency.
This adaptive CNN quantization method, combined with the power-of-two quantization results, strikes a balance between quantization accuracy and deployment efficiency in embedded hardware, offering valuable insights for the industrial edge computing domain.
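To illustrate the general idea behind ternary power-of-two quantization (not the authors' exact APTQ algorithm, whose subfilter decomposition and global adaptation are described in the full paper), the sketch below maps each weight to {-2^k, 0, +2^k}. The zeroing boundary is data dependent rather than fixed, echoing the paper's "unfixed boundary thresholds", and the scale is rounded to a power of two so that on hardware the multiplication reduces to a bit shift. The function name, the `threshold_ratio` parameter, and the specific threshold rule are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def ternary_pot_quantize(w, threshold_ratio=0.7):
    """Illustrative ternary power-of-two quantization (assumption, not APTQ itself).

    Weights with magnitude below a data-dependent threshold are zeroed;
    the remainder are mapped to +/- 2^k, where 2^k is the power of two
    nearest to the mean magnitude of the retained weights.
    """
    w = np.asarray(w, dtype=np.float64)
    # Data-dependent (unfixed) boundary: a fraction of the mean |w|,
    # so the boundary adapts to each filter's actual weight distribution.
    delta = threshold_ratio * np.mean(np.abs(w))
    mask = np.abs(w) > delta
    if not mask.any():
        return np.zeros_like(w)
    # Scale = power of two closest (in log2) to the mean kept magnitude;
    # a power-of-two scale turns multiplications into bit shifts on chip.
    scale = 2.0 ** np.round(np.log2(np.mean(np.abs(w[mask]))))
    return np.where(mask, np.sign(w) * scale, 0.0)
```

For example, `ternary_pot_quantize([0.9, -0.8, 0.05, 0.02, -1.1])` zeroes the two small weights and maps the rest to a shared power-of-two magnitude, yielding only three distinct values per filter.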