Department of Computer and Software Engineering, National University of Sciences and Technology, Islamabad 44000, Pakistan.
Center for Cyber-Physical Systems (C2PS), Department of Electrical Engineering and Computer Science, Khalifa University, Abu Dhabi 127788, United Arab Emirates.
Sensors (Basel). 2022 Feb 21;22(4):1667. doi: 10.3390/s22041667.
Human beings tend to learn incrementally from a rapidly changing environment without compromising or forgetting already learned representations. Although deep learning can mimic such human behavior to some extent, it suffers from catastrophic forgetting: performance on previously learned tasks degrades drastically while new knowledge is being acquired. Many researchers have proposed promising solutions that mitigate such catastrophic forgetting through knowledge distillation. However, to the best of our knowledge, no literature to date exploits the complex relationships between these solutions or leverages them for effective learning that spans multiple datasets and even multiple domains. In this paper, we propose a continual learning objective that incorporates a mutual distillation loss to capture such complex relationships, allowing deep learning models to retain prior knowledge while adapting to new classes, new datasets, and even new applications. The proposed objective was rigorously tested on nine publicly available, multi-vendor, multimodal datasets spanning three applications, achieving a top-1 accuracy of 0.9863 and an F1-score of 0.9930.
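The abstract does not specify the exact form of the paper's mutual distillation loss, so the following is only a minimal sketch of a generic distillation-regularized continual-learning objective (standard soft-target distillation plus task cross-entropy) written in PyTorch. The function name distillation_objective and the temperature/alpha hyperparameters are illustrative assumptions, not the authors' implementation.

    import torch
    import torch.nn.functional as F

    def distillation_objective(student_logits, teacher_logits, labels,
                               temperature=2.0, alpha=0.5):
        # Cross-entropy on the current task's ground-truth labels
        # (plasticity: learn the new classes/dataset).
        ce = F.cross_entropy(student_logits, labels)
        # Soft-target distillation: KL divergence between the
        # temperature-scaled student and teacher distributions
        # (stability: retain prior knowledge and resist
        # catastrophic forgetting).
        kd = F.kl_div(
            F.log_softmax(student_logits / temperature, dim=1),
            F.softmax(teacher_logits / temperature, dim=1),
            reduction="batchmean",
        ) * temperature ** 2
        # alpha trades off new-task learning against retention.
        return alpha * ce + (1.0 - alpha) * kd

In such a setup the teacher is typically a frozen snapshot of the model taken before the new task is introduced; a mutual variant would additionally distill in the reverse direction, but that detail is not recoverable from this abstract.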