School of Computer Science and Technology, Harbin University of Science and Technology, Harbin 150080, China.
Neural Netw. 2024 May;173:106163. doi: 10.1016/j.neunet.2024.106163. Epub 2024 Feb 3.
Replay-based methods have shown strong potential for learning continually from an online data stream. Their main challenge is selecting representative samples to store in the buffer and replay. In this paper, we propose the Cross-entropy Contrastive Replay (CeCR) method for the online class-incremental setting. First, we present a Class-focused Memory Retrieval method that performs class-level sampling without replacement. Second, we put forward a class-mean approximation memory update method that selectively replaces misclassified training samples with samples from the current input batch. In addition, we propose a Cross-entropy Contrastive Loss that trains the model to acquire more solid knowledge and thus learn more effectively. Experiments show that CeCR performs comparably to or better than state-of-the-art methods on two benchmark datasets.
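The abstract describes Class-focused Memory Retrieval only as class-level sampling without replacement from the replay buffer. The following is a minimal Python sketch of that idea; the function name, the round-robin ordering over classes, and the buffer layout are illustrative assumptions, not the paper's implementation.

```python
import random
from collections import defaultdict

def class_focused_retrieval(buffer, batch_size):
    """Illustrative sketch: class-level sampling without replacement.

    `buffer` is assumed to be a list of (x, y) pairs; samples are drawn
    per class so every stored class is visited before any class repeats.
    """
    # Group buffered samples by class label.
    by_class = defaultdict(list)
    for x, y in buffer:
        by_class[y].append((x, y))

    # Shuffle within each class so retrieval order is random.
    for samples in by_class.values():
        random.shuffle(samples)

    # Round-robin over classes, popping without replacement,
    # until the replay batch is full or the buffer is exhausted.
    batch, classes = [], list(by_class.keys())
    while len(batch) < batch_size and classes:
        for c in list(classes):
            if not by_class[c]:
                classes.remove(c)
                continue
            batch.append(by_class[c].pop())
            if len(batch) == batch_size:
                break
    return batch
```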
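The exact form of the Cross-entropy Contrastive Loss is not given in the abstract. Below is a hedged PyTorch sketch that pairs standard cross-entropy with a supervised-contrastive-style term over L2-normalized features; the weight `lam`, temperature `tau`, and all tensor shapes are assumptions for illustration, not the paper's definition.

```python
import torch
import torch.nn.functional as F

def cross_entropy_contrastive_loss(logits, features, labels, tau=0.1, lam=1.0):
    """Assumed combination: cross-entropy plus a SupCon-style term.

    Assumed shapes: logits [B, C], features [B, D], labels [B].
    """
    ce = F.cross_entropy(logits, labels)

    z = F.normalize(features, dim=1)
    sim = z @ z.t() / tau                      # pairwise similarities [B, B]
    mask = labels.unsqueeze(0) == labels.unsqueeze(1)
    self_mask = torch.eye(len(labels), dtype=torch.bool, device=z.device)
    mask = mask & ~self_mask                   # positives: same class, not self

    # Row-wise log-softmax, excluding each anchor itself.
    sim = sim.masked_fill(self_mask, float('-inf'))
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)

    # Average log-likelihood of positives per anchor; masked_fill
    # (not multiplication) avoids -inf * 0 producing NaN.
    pos_log_prob = log_prob.masked_fill(~mask, 0.0)
    pos_counts = mask.sum(1)
    valid = pos_counts > 0
    if not valid.any():
        return ce                              # no positive pairs in this batch
    contrast = -pos_log_prob.sum(1)[valid] / pos_counts[valid]
    return ce + lam * contrast.mean()
```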