Zhao Hanbin, Wang Hui, Fu Yongjian, Wu Fei, Li Xi
IEEE Trans Neural Netw Learn Syst. 2022 Oct;33(10):5966-5977. doi: 10.1109/TNNLS.2021.3072041. Epub 2022 Oct 5.
Under memory-resource-limited constraints, class-incremental learning (CIL) usually suffers from the "catastrophic forgetting" problem when updating the joint classification model upon the arrival of newly added classes. To cope with the forgetting problem, many CIL methods transfer the knowledge of old classes by preserving some exemplar samples in a size-constrained memory buffer. To utilize the memory buffer more efficiently, we propose to keep more auxiliary low-fidelity exemplar samples, rather than the original high-fidelity exemplar samples. Such a memory-efficient exemplar-preserving scheme makes old-class knowledge transfer more effective. However, the low-fidelity exemplar samples are often distributed in a domain different from that of the original exemplar samples, that is, there is a domain shift. To alleviate this problem, we propose a duplet learning scheme that seeks to construct domain-compatible feature extractors and classifiers, which greatly narrows the above domain gap. As a result, these low-fidelity auxiliary exemplar samples can moderately replace the original exemplar samples at a lower memory cost. In addition, we present a robust classifier adaptation scheme, which further refines the biased classifier (learned with samples containing distillation label knowledge about old classes) with the help of samples with pure true class labels. Experimental results demonstrate the effectiveness of this work against state-of-the-art approaches. We will release the code, baselines, and training statistics for all models to facilitate future research.
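The abstract describes two core ideas: keeping low-fidelity copies of exemplars so that more of them fit in a size-constrained memory buffer, and training the feature extractor to be compatible across the resulting domain gap. The sketch below is a minimal, hypothetical reading of these ideas in PyTorch; it is not the authors' released code, and the class names, the downsampling choice, and the MSE-based compatibility loss are illustrative assumptions rather than the paper's exact formulation.

```python
# Minimal sketch (illustrative assumptions, not the authors' implementation):
# (1) a buffer that stores downsampled exemplars under a fixed pixel budget,
# (2) a duplet-style loss that pulls the features of a low-fidelity exemplar
#     toward those of its high-fidelity original, narrowing the domain gap.
import torch
import torch.nn.functional as F


class LowFidelityExemplarBuffer:
    """Stores downsampled exemplar images so more exemplars fit in a fixed budget."""

    def __init__(self, memory_budget_pixels: int, low_res: int = 16):
        self.memory_budget_pixels = memory_budget_pixels  # total pixels allowed
        self.low_res = low_res                            # assumed low-fidelity resolution
        self.images = []                                  # low-fidelity exemplars (CHW tensors)
        self.labels = []

    def add(self, image: torch.Tensor, label: int) -> bool:
        """Downsample an exemplar before storing it; returns False if the buffer is full."""
        low = F.interpolate(image.unsqueeze(0), size=self.low_res,
                            mode="bilinear", align_corners=False).squeeze(0)
        used = sum(img.numel() for img in self.images)
        if used + low.numel() > self.memory_budget_pixels:
            return False
        self.images.append(low)
        self.labels.append(label)
        return True


def duplet_compatibility_loss(feat_extractor: torch.nn.Module,
                              high_fid: torch.Tensor,
                              low_fid: torch.Tensor) -> torch.Tensor:
    """Encourage domain-compatible features for high- and low-fidelity views of the same sample."""
    # Upsample the low-fidelity copy back to the network input size so both views
    # pass through the same extractor (an assumed preprocessing choice).
    low_up = F.interpolate(low_fid, size=high_fid.shape[-2:],
                           mode="bilinear", align_corners=False)
    f_high = feat_extractor(high_fid)
    f_low = feat_extractor(low_up)
    # Match the low-fidelity features to the (detached) high-fidelity features.
    return F.mse_loss(f_low, f_high.detach())
```

In this reading, the compatibility loss would be added to the usual classification and distillation objectives during each incremental step, so that the stored low-fidelity exemplars remain usable stand-ins for the discarded originals.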