Smalz R, Conrad M
Department of Computer Science, Wayne State University, Detroit, MI 48202, USA.
Biosystems. 1995;34(1-3):161-72. doi: 10.1016/0303-2647(94)01443-b.
A new approach to training recurrent neural networks is applied to temporal neural processing problems. Our method combines Darwinian variation and selection with a credit apportionment mechanism for assigning credit to individual neurons within groups of competing networks. Interconnections between the networks allow the outputs of neurons in one network to be available to neurons in other networks. The firing behavior of neurons in a variety of networks is compared with that of the corresponding neurons in high-performing networks for specific input contexts. Payoffs accorded to neurons in one network can thus be shared with neurons in other networks. Only the best neurons over the entire repertoire of networks are allowed to pass their crucial function-determining parameters on to other neurons. The algorithm is demonstrated with connectionist-type units on several temporal processing tasks and compared to genetic algorithms.
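The scheme described above can be illustrated with a minimal sketch (this is not the authors' code). A small population of recurrent networks is evolved on a toy temporal task; after each generation, the neuron in each weaker network whose firing trace diverges most from the corresponding neuron in the best network is overwritten by a mutated copy of that neuron's incoming weights. The one-step delayed-recall task, the network sizes, the fixed input sequence, and the mutation scale are all illustrative assumptions; the published method is richer (payoff sharing across interconnected networks, comparison per input context).

```python
import numpy as np

rng = np.random.default_rng(0)
N_NETS, N_NEURONS, STEPS = 8, 6, 20  # assumed sizes, for illustration only

def step(W, h, x):
    # One recurrent update: each neuron sees the external input plus all firings.
    return np.tanh(W @ np.concatenate(([x], h)))

def evaluate(W, inputs):
    """Run the network; fitness is -MSE on a one-step delayed-recall task."""
    h = np.zeros(N_NEURONS)
    traces, err = [], 0.0
    for t, x in enumerate(inputs):
        h = step(W, h, x)
        traces.append(h.copy())
        if t > 0:
            err += (h[0] - inputs[t - 1]) ** 2  # neuron 0 serves as the output
    return -err / (len(inputs) - 1), np.array(traces)

def train(generations=200):
    nets = [rng.normal(0, 0.5, (N_NEURONS, N_NEURONS + 1)) for _ in range(N_NETS)]
    inputs = rng.choice([-1.0, 1.0], size=STEPS)  # fixed sequence, a simplification
    for _ in range(generations):
        results = [evaluate(W, inputs) for W in nets]
        best = int(np.argmax([f for f, _ in results]))
        best_traces = results[best][1]
        for i, W in enumerate(nets):
            if i == best:
                continue  # the best network is kept intact
            # Neuron-level credit apportionment (simplified): the neuron whose
            # firing diverges most from its counterpart in the best network
            # receives a mutated copy of that counterpart's parameters.
            div = np.mean((results[i][1] - best_traces) ** 2, axis=0)
            worst = int(np.argmax(div))
            W[worst] = nets[best][worst] + rng.normal(0, 0.05, N_NEURONS + 1)
    return max(evaluate(W, inputs)[0] for W in nets)

fit = train()
print(fit)
```

Because the output neuron is bounded in (-1, 1) and the targets are ±1, the per-step squared error lies in [0, 4], so the reported fitness is always in [-4, 0]; values approaching 0 indicate improving recall.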