Department of Experimental Psychology, University of Bristol; Department of Psychology, Royal Holloway, University of London.
Cogn Sci. 2009 Sep;33(7):1183-6. doi: 10.1111/j.1551-6709.2009.01062.x.
Sibley et al. (2008) report a recurrent neural network model designed to learn wordform representations suitable for written and spoken word identification. The authors claim that their sequence encoder network overcomes a key limitation associated with models that code letters by position (e.g., CAT might be coded as C-in-position-1, A-in-position-2, T-in-position-3). The problem with coding letters by position (slot-coding) is that it is difficult to generalize knowledge across positions; for example, the overlap between CAT and TOMCAT is lost. Although we agree this is a critical problem with many slot-coding schemes, we question whether the sequence encoder model addresses this limitation, and we highlight another deficiency of the model. We conclude that alternative theories are more promising.