IEEE Trans Neural Netw Learn Syst. 2022 Jun;33(6):2575-2585. doi: 10.1109/TNNLS.2021.3094139. Epub 2022 Jun 1.
Differentiable neural computers (DNCs) extend artificial neural networks with an explicit memory without interference, thus enabling the model to perform classic computation tasks, such as graph traversal. However, such models are difficult to train, requiring long training times and large datasets. In this work, we achieve some of the computational capabilities of DNCs with a model that can be trained very efficiently, namely, an echo state network with an explicit memory without interference. This extension enables echo state networks to recognize all regular languages, including those that contractive echo state networks provably cannot recognize. Furthermore, we demonstrate experimentally that our model performs comparably to its fully trained deep version on several typical benchmark tasks for DNCs.
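To make the core idea concrete, below is a minimal sketch of an echo state network extended with an explicit, interference-free memory: a fixed random reservoir, a linear readout trained by ridge regression (the only trained component, which is why training is fast), and an append-only memory whose entries are never overwritten. The class name, the FIFO read head, and the externally supplied write/read times are illustrative assumptions, not the authors' architecture.

```python
import numpy as np

class EchoStateMemoryNetwork:
    """Sketch (not the paper's implementation): an echo state network
    with a write-once external memory. Rows are only appended, never
    overwritten, so stored entries cannot interfere with one another."""

    def __init__(self, n_in, n_res, n_out, spectral_radius=0.9, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
        W = rng.uniform(-0.5, 0.5, (n_res, n_res))
        # Rescale so the reservoir is contractive (echo state property).
        W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))
        self.W = W
        self.W_out = np.zeros((n_out, n_res))  # the only trained part

    def run(self, U, write_steps=(), read_steps=()):
        """Drive the reservoir with input sequence U (T x n_in).
        At each t in write_steps the current state is appended to memory;
        at each t in read_steps the oldest unread entry replaces the
        state (an assumed first-in-first-out addressing scheme)."""
        memory, head = [], 0
        x = np.zeros(self.W.shape[0])
        states = []
        for t, u in enumerate(U):
            x = np.tanh(self.W @ x + self.W_in @ u)
            if t in write_steps:
                memory.append(x.copy())  # append-only: no interference
            if t in read_steps and head < len(memory):
                x = memory[head].copy()  # recall overrides the state
                head += 1
            states.append(x)
        return np.array(states)

    def fit_readout(self, states, targets, ridge=1e-6):
        # Ridge regression: no backpropagation through time is needed.
        S, Y = states, targets
        self.W_out = np.linalg.solve(S.T @ S + ridge * np.eye(S.shape[1]),
                                     S.T @ Y).T
        return self.W_out @ states.T

# Usage sketch: write the state at t=0, recall it at t=5.
net = EchoStateMemoryNetwork(n_in=3, n_res=100, n_out=3)
U = np.random.default_rng(1).standard_normal((10, 3))
states = net.run(U, write_steps={0}, read_steps={5})
```

The append-only memory is what realizes "memory without interference" in the abstract's sense: a recalled state is exactly the state that was stored, regardless of what was written afterward, which is what lets the model recognize languages that a purely contractive reservoir provably cannot.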