Pouya Khodaee, Herna L. Viktor, Wojtek Michalowski
School of Electrical Engineering and Computer Science (EECS), University of Ottawa, 800 King Edward Avenue, Ottawa, ON K1N 6N5 Canada.
Telfer School of Management, University of Ottawa, 55 Laurier Avenue East, Ottawa, ON K1N 6N5 Canada.
Artif Intell Rev. 2024;57(8):217. doi: 10.1007/s10462-024-10853-9. Epub 2024 Jul 26.
Lifelong Machine Learning (LML) denotes a scenario involving multiple sequential tasks, each accompanied by its respective dataset, in order to solve specific learning problems. In this context, the focus of LML techniques is on utilizing already acquired knowledge to adapt to new tasks efficiently. Essentially, LML concerns facing new tasks while exploiting the knowledge previously gathered from earlier tasks, not only to help in adapting to new tasks but also to enrich the understanding of past ones. Understanding this concept makes it easier to grasp one of the major challenges in LML, known as Knowledge Transfer (KT). This systematic literature review aims to explore state-of-the-art KT techniques within LML and assess the evaluation metrics and commonly utilized datasets in this field, thereby keeping the LML research community updated with the latest developments. From an initial pool of 417 articles from four distinguished databases, 30 were deemed highly pertinent for the information extraction phase. The analysis recognizes four primary KT techniques: Replay, Regularization, Parameter Isolation, and Hybrid. This study delves into the characteristics of these techniques across both neural network (NN) and non-neural network (non-NN) frameworks, highlighting their distinct advantages that have captured researchers' interest. It was found that the majority of the studies focused on supervised learning within an NN modelling framework, particularly employing Parameter Isolation and Hybrid techniques for KT. The paper concludes by pinpointing research opportunities, including investigating non-NN models for Replay and exploring applications outside of computer vision (CV).
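Of the four KT families the review identifies, Replay is the simplest to illustrate: a small memory of past-task examples is interleaved with new-task training data to mitigate forgetting. The sketch below is a minimal illustration only, assuming a reservoir-sampling storage policy; the class name and details are our own and are not taken from any specific surveyed method.

```python
import random

class ReplayBuffer:
    """Illustrative memory of past-task examples for replay-based KT.

    Reservoir sampling keeps an approximately uniform sample over all
    examples seen so far, regardless of how many tasks have passed.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.memory = []   # stored (x, y) pairs from earlier tasks
        self.seen = 0      # total number of examples offered so far

    def add(self, example):
        # Classic reservoir sampling: keep the first `capacity` items,
        # then replace a random slot with decreasing probability.
        self.seen += 1
        if len(self.memory) < self.capacity:
            self.memory.append(example)
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.memory[j] = example

    def sample(self, k):
        # Draw up to k stored examples to mix into a new task's mini-batch.
        return random.sample(self.memory, min(k, len(self.memory)))
```

In a typical replay setup, each training step on a new task would concatenate a fresh mini-batch with `buffer.sample(k)` so the model keeps rehearsing earlier tasks while learning the current one.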