Carrasco R C, Forcada M L, Valdés-Muñoz M A, Ñeco R P
Departament de Llenguatges i Sistemes Informàtics, Universitat d'Alacant, E-03071 Alacant, Spain.
Neural Comput. 2000 Sep;12(9):2129-74. doi: 10.1162/089976600300015097.
There has been considerable interest in the use of discrete-time recurrent neural networks (DTRNN) to learn finite-state tasks, with interesting results regarding the induction of simple finite-state machines from input-output strings. Parallel work has studied the computational power of DTRNN in connection with finite-state computation. This article describes a simple strategy for devising stable encodings of finite-state machines in computationally capable discrete-time recurrent neural architectures with sigmoid units, and gives a detailed presentation of how this strategy may be applied to encode a general class of finite-state machines in a variety of commonly used first- and second-order recurrent neural networks. Unlike previous work that either imposed restrictions on state values or relied on a detailed analysis based on fixed-point attractors, our approach applies to any positive, bounded, strictly growing, continuous activation function. It uses simple bounding criteria derived from a study of the conditions under which a proposed encoding scheme guarantees that the DTRNN actually behaves as a finite-state machine.
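As a rough illustration of the kind of encoding the abstract describes (a sketch only, not the authors' exact construction), the snippet below simulates a small deterministic finite automaton with a second-order sigmoid DTRNN. States are one-hot coded in the unit activations, second-order weights of strength H drive the transition table, and the bias is set to -H/2; the gain H and the 0.1/0.9 thresholds used to check that activations stay near 0 or 1 (i.e., that the network still behaves as a finite-state machine) are illustrative assumptions.

```python
# Minimal sketch: a DFA encoded in a second-order sigmoid DTRNN with
# "high-gain" weights, in the spirit of the strategy described above.
# The weight strength H and the decision thresholds are illustrative.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Example DFA over {a, b}: accepts strings with an even number of 'a's.
states = ["even", "odd"]               # "even" is the start state
alphabet = ["a", "b"]
delta = {("even", "a"): "odd", ("even", "b"): "even",
         ("odd", "a"): "even", ("odd", "b"): "odd"}
accepting = {"even"}

n, m = len(states), len(alphabet)
H = 10.0                               # large gain keeps activations near 0/1

# Second-order weights: W[i, j, k] drives next-state unit i when the network
# is in state j and reads symbol k: +H if delta(q_j, a_k) = q_i, else -H.
W = -H * np.ones((n, n, m))
for (qj, ak), qi in delta.items():
    W[states.index(qi), states.index(qj), alphabet.index(ak)] = H
b = -H / 2 * np.ones(n)                # bias pushes inactive units toward 0

def accepts(string):
    x = np.zeros(n)
    x[states.index("even")] = 1.0      # one-hot encoding of the start state
    for ch in string:
        u = np.zeros(m)
        u[alphabet.index(ch)] = 1.0
        x = sigmoid(np.einsum("ijk,j,k->i", W, x, u) + b)
        # Crude stability check: every unit must stay near 0 or 1, otherwise
        # the network is no longer behaving as a finite-state machine.
        assert np.all((x < 0.1) | (x > 0.9)), "encoding lost its FSM behaviour"
    return states[int(np.argmax(x))] in accepting

for s in ["", "a", "aa", "ab", "aba", "abab"]:
    print(repr(s), accepts(s), "expected:", s.count("a") % 2 == 0)
```

With H = 10, the active unit stays above 0.99 and the inactive units below 10^-6 on every step, so the assertion never fires; shrinking H eventually breaks the bound, which is the kind of condition the paper's bounding criteria are meant to rule out.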