Wang Peisong, Chen Weihan, He Xiangyu, Chen Qiang, Liu Qingshan, Cheng Jian
IEEE Trans Pattern Anal Mach Intell. 2023 Feb;45(2):2119-2135. doi: 10.1109/TPAMI.2022.3159369. Epub 2023 Jan 6.
Deep neural networks have shown great promise in various domains. Meanwhile, problems such as storage and computing overheads arise along with these breakthroughs. To solve these problems, network quantization has received increasing attention due to its high efficiency and hardware-friendly properties. Nonetheless, most existing quantization approaches rely on the full training dataset and a time-consuming fine-tuning process to retain accuracy. Post-training quantization avoids these problems; however, it has mainly proven effective for 8-bit quantization. In this paper, we theoretically analyze the effect of network quantization and show that the quantization loss in the final output layer is bounded by the layer-wise activation reconstruction error. Based on this analysis, we propose an Optimization-based Post-training Quantization framework and a novel Bit-split optimization approach to achieve minimal accuracy degradation. The proposed framework is validated on a variety of computer vision tasks, including image classification, object detection, and instance segmentation, with various network architectures. Specifically, we achieve near-original model performance even when quantizing FP32 models to 3-bit without fine-tuning.
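The quantity central to the abstract's analysis, the layer-wise activation reconstruction error, can be illustrated with a minimal sketch. The snippet below uses a plain symmetric uniform quantizer and random data; it is not the paper's Bit-split method, only a hedged illustration of how one would measure, on calibration inputs, the error that the analysis bounds the final output loss by.

```python
import numpy as np

def uniform_quantize(w, n_bits=3):
    # Symmetric uniform quantizer (illustrative stand-in, not Bit-split).
    qmax = 2 ** (n_bits - 1) - 1
    scale = np.abs(w).max() / qmax
    # Round weights to integer levels in [-qmax, qmax], then de-quantize.
    q = np.clip(np.round(w / scale), -qmax, qmax)
    return q * scale

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 128))   # hypothetical layer weights
X = rng.normal(size=(128, 32))   # hypothetical calibration activations

Wq = uniform_quantize(W, n_bits=3)
# Layer-wise activation reconstruction error ||WX - Q(W)X||_F:
# the per-layer quantity that, per the paper's analysis, bounds
# the quantization loss at the network's final output.
err = np.linalg.norm(W @ X - Wq @ X)
```

An optimization-based post-training approach would choose the quantized weights to minimize `err` per layer on calibration data, rather than simply rounding as done here.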