Masana Marc, Liu Xialei, Twardowski Bartlomiej, Menta Mikel, Bagdanov Andrew D, van de Weijer Joost
IEEE Trans Pattern Anal Mach Intell. 2023 May;45(5):5513-5533. doi: 10.1109/TPAMI.2022.3213473. Epub 2023 Apr 3.
For future learning systems, incremental learning is desirable because it allows for: efficient resource usage, by eliminating the need to retrain from scratch when new data arrives; reduced memory usage, by preventing or limiting the amount of data that must be stored (also important when privacy restrictions are imposed); and learning that more closely resembles human learning. The main challenge for incremental learning is catastrophic forgetting, which refers to the precipitous drop in performance on previously learned tasks after learning a new one. Incremental learning of deep neural networks has seen explosive growth in recent years. Initial work focused on task-incremental learning, where a task-ID is provided at inference time. Recently, we have seen a shift towards class-incremental learning, where the learner must discriminate at inference time between all classes seen in previous tasks without recourse to a task-ID. In this paper, we provide a complete survey of existing class-incremental learning methods for image classification, and in particular, we perform an extensive experimental evaluation of thirteen class-incremental methods. We consider several new experimental scenarios, including a comparison of class-incremental methods on multiple large-scale image classification datasets, an investigation into small and large domain shifts, and a comparison of various network architectures.