Ohta Hiroyuki, Gunji Yukio Pegio
Graduate School of Science and Technology, Kobe University, Rokkodai, Nada, Kobe, Japan.
Neural Netw. 2006 Oct;19(8):1106-19. doi: 10.1016/j.neunet.2006.06.005. Epub 2006 Sep 20.
We propose a recurrent neural network architecture capable of incremental learning and test its performance. In incremental learning, the consistency between the existing internal representation and a new sequence is unknown, so it is not appropriate to overwrite the existing internal representation with each new sequence. In the proposed model, the parallel pathways from input to output are preserved as far as possible, and a pathway that has emitted a wrong output is inhibited by the previously fired pathway. The network accordingly begins to try other pathways ad hoc. This modeling approach is based on the concept of parallel pathways from input to output, rather than on the view of the brain as an integration of state spaces. We discuss extending this approach to modeling higher functions such as decision making.
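To make the pathway-inhibition idea concrete, the following is a minimal sketch in Python. The PathwayNetwork class, its lookup-table "pathways", and all method names are a hypothetical simplification of the mechanism described above, not the authors' implementation: each pathway stands in for one parallel input-to-output route, an erroneous pathway is inhibited rather than overwritten, and other pathways are then tried ad hoc.

import random

class PathwayNetwork:
    # Hypothetical simplification: each "pathway" is a lookup table from
    # input symbols to output symbols, standing in for one parallel
    # input-to-output route through the network.
    def __init__(self, n_pathways):
        self.pathways = [dict() for _ in range(n_pathways)]
        self.inhibited = set()   # pathways suppressed after a wrong output
        self.last_fired = None   # index of the previously fired pathway

    def respond(self, x):
        # Prefer an uninhibited pathway that already knows this input;
        # otherwise try some uninhibited pathway ad hoc.
        candidates = [i for i in range(len(self.pathways))
                      if i not in self.inhibited]
        if not candidates:              # everything inhibited: start over
            self.inhibited.clear()
            candidates = list(range(len(self.pathways)))
        known = [i for i in candidates if x in self.pathways[i]]
        self.last_fired = random.choice(known or candidates)
        return self.pathways[self.last_fired].get(x)

    def feedback(self, x, target, correct):
        if correct:
            # Consistent with this pathway: store the mapping and leave
            # the other pathways' existing representations untouched.
            self.pathways[self.last_fired][x] = target
            self.inhibited.clear()
        else:
            # The erroneous pathway is inhibited (in the paper, by the
            # previously fired pathway), so the next call tries another
            # parallel pathway instead of overwriting this one.
            self.inhibited.add(self.last_fired)

In this toy version, learning a new sequence that conflicts with an existing pathway inhibits that pathway for the current trial only; the stored mappings themselves are never erased, which is the preservation property the abstract emphasizes.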