Peng Hanyu, Wu Jiaxiang, Zhang Zhiwei, Chen Shifeng, Zhang Hai-Tao
IEEE Trans Neural Netw Learn Syst. 2022 Sep;33(9):4960-4970. doi: 10.1109/TNNLS.2021.3064293. Epub 2022 Aug 31.
For portable devices with limited resources, deploying deep networks is often difficult due to prohibitive computational overhead. Numerous approaches have been proposed to quantize weights and/or activations to speed up inference. Loss-aware quantization directly formulates the impact of weight quantization on the model's final loss. However, we discover that, under certain circumstances, such a method may fail to converge and instead oscillate. To tackle this issue, we introduce a novel loss-aware quantization algorithm that efficiently compresses deep networks with low bit-width model weights. We provide a more accurate estimation of gradients by leveraging the Taylor expansion to compensate for the quantization error, which leads to better convergence behavior. Our theoretical analysis indicates that the gradient mismatch issue can be fixed by the newly introduced quantization error compensation term. Experimental results on both linear models and convolutional networks verify the effectiveness of the proposed method.
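The compensated-gradient idea from the abstract can be illustrated on a toy quadratic loss: a plain loss-aware update evaluates the gradient at the quantized weights, while a second-order Taylor term corrects for the quantization error. The sketch below is a minimal illustration under assumed choices (a 1-bit quantizer with a per-tensor scale, a diagonal-Hessian compensation term `h_diag * (w - wq)`), not the paper's exact algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear regression: L(w) = 0.5 * ||X w - y||^2 / n
X = rng.normal(size=(200, 8))
w_true = rng.normal(size=8)
y = X @ w_true

def quantize(w):
    # Illustrative 1-bit quantization with a per-tensor scale.
    alpha = np.mean(np.abs(w))
    return alpha * np.sign(w)

def grad(w):
    return X.T @ (X @ w - y) / len(y)

# Diagonal of the Hessian of L (constant for a quadratic loss).
h_diag = np.sum(X * X, axis=0) / len(y)

w = rng.normal(size=8)
lr = 0.1
for _ in range(300):
    wq = quantize(w)
    g = grad(wq)            # gradient at the quantized point
    # Hypothetical Taylor-expansion compensation for the quantization
    # error (w - wq): grad(w) ~ grad(wq) + H (w - wq), with H taken
    # to be diagonal here. The paper's exact term may differ.
    g_comp = g + h_diag * (w - wq)
    w -= lr * g_comp

loss = 0.5 * np.mean((X @ quantize(w) - y) ** 2)
```

Because the loss here is quadratic, `grad(wq) + H (w - wq)` recovers the true gradient at the full-precision weights exactly when the full Hessian is used; the diagonal approximation keeps the correction cheap.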