Learning Then, Learning Now, and Every Second in Between: Lifelong Learning With a Simulated Humanoid Robot.

Author Information

Logacjov Aleksej, Kerzel Matthias, Wermter Stefan

Affiliations

Department of Informatics, Research Group Knowledge Technology, Universität Hamburg, Hamburg, Germany.

Publication Information

Front Neurorobot. 2021 Jul 1;15:669534. doi: 10.3389/fnbot.2021.669534. eCollection 2021.

Abstract

Long-term human-robot interaction requires the continuous acquisition of knowledge. This ability is referred to as lifelong learning (LL). LL is a long-standing challenge in machine learning due to catastrophic forgetting: continuously learning from novel experiences degrades performance on previously acquired knowledge. Two recently published LL approaches are the Growing Dual-Memory (GDM) and the Self-organizing Incremental Neural Network+ (SOINN+). Both are growing neural networks that create new neurons in response to novel sensory experiences. The latter shows state-of-the-art clustering performance on sequentially available data with low memory requirements in terms of the number of nodes. However, its classification capabilities have not been investigated. Our research paper makes two novel contributions: (I) An extended SOINN+ approach, called associative SOINN+ (A-SOINN+), is proposed. It adopts two main properties of the GDM model to facilitate classification. (II) A new LL object recognition dataset (v-NICO-World-LL) is presented. It is recorded in a nearly photorealistic virtual environment, in which a virtual humanoid robot manipulates 100 different objects belonging to 10 classes. Real-world and artificially created background images, grouped into four complexity levels, are utilized. Evaluated on two LL object recognition datasets, the novel v-NICO-World-LL and the well-known CORe50, A-SOINN+ reaches classification accuracy similar to that of the best GDM architecture in this work while consisting of 30 to 350 times fewer neurons. Furthermore, we observe an approximately 268 times shorter training time. These reductions translate into lower memory and computational requirements, indicating higher suitability for autonomous social robots with limited computational resources and enabling more efficient LL during long-term human-robot interactions.
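
To make the growing-network idea in the abstract concrete, the following is a minimal, illustrative Python sketch of a growing prototype network with associative labeling: a new node is created whenever an input lies farther than a novelty threshold from every existing node, and each node keeps a label-frequency histogram that is used for classification. This is a simplified stand-in for the mechanism summarized above, not the published A-SOINN+ or GDM algorithm; the class name and the novelty_threshold, lr, and n_classes parameters are assumptions made for this example.

import numpy as np

# Illustrative sketch only: NOT the published A-SOINN+ or GDM implementation.
class GrowingAssociativeNetwork:
    def __init__(self, novelty_threshold=1.0, lr=0.1, n_classes=10):
        self.threshold = novelty_threshold  # distance beyond which an input counts as novel (assumed)
        self.lr = lr                        # adaptation rate for the winning node (assumed)
        self.n_classes = n_classes
        self.nodes = []                     # prototype weight vectors ("neurons")
        self.histograms = []                # per-node label-frequency counts (associative labels)

    def _nearest(self, x):
        # Index and distance of the closest prototype to input x.
        dists = [np.linalg.norm(x - w) for w in self.nodes]
        idx = int(np.argmin(dists))
        return idx, dists[idx]

    def partial_fit(self, x, label):
        # Learn from a single (feature vector, class index) sample of the stream.
        x = np.asarray(x, dtype=float)
        if not self.nodes:
            self.nodes.append(x.copy())
            self.histograms.append(np.zeros(self.n_classes))
            self.histograms[0][label] += 1
            return
        idx, dist = self._nearest(x)
        if dist > self.threshold:
            # Novel experience: grow the network with a new prototype node.
            self.nodes.append(x.copy())
            self.histograms.append(np.zeros(self.n_classes))
            self.histograms[-1][label] += 1
        else:
            # Familiar experience: move the winner toward x and update its label counts.
            self.nodes[idx] += self.lr * (x - self.nodes[idx])
            self.histograms[idx][label] += 1

    def predict(self, x):
        # Classify x by the most frequent label stored at its nearest prototype.
        idx, _ = self._nearest(np.asarray(x, dtype=float))
        return int(np.argmax(self.histograms[idx]))

In a lifelong-learning setting, partial_fit would be called once per incoming sample as the data stream arrives, and the number of stored prototypes (len(net.nodes)) is the kind of neuron count to which the abstract's memory comparison refers.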


Figure (fnbot-15-669534-g0002): https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d7ac/8281815/59895adf466d/fnbot-15-669534-g0002.jpg
