
Learning representations of wordforms with recurrent networks: Comment on Sibley, Kello, Plaut, and Elman (2008).

Affiliations

Department of Experimental Psychology, University of Bristol; Department of Psychology, Royal Holloway, University of London.

Publication Information

Cogn Sci. 2009 Sep;33(7):1183-6. doi: 10.1111/j.1551-6709.2009.01062.x.

Abstract

Sibley et al. (2008) report a recurrent neural network model designed to learn wordform representations suitable for written and spoken word identification. The authors claim that their sequence encoder network overcomes a key limitation associated with models that code letters by position (e.g., CAT might be coded as C-in-position-1, A-in-position-2, T-in-position-3). The problem with coding letters by position (slot-coding) is that it is difficult to generalize knowledge across positions; for example, the overlap between CAT and TOMCAT is lost. Although we agree this is a critical problem with many slot-coding schemes, we question whether the sequence encoder model addresses this limitation, and we highlight another deficiency of the model. We conclude that alternative theories are more promising.
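The positional-overlap problem the commentary describes can be made concrete with a minimal sketch (ours, not taken from Sibley et al. or the commentary): under a strict left-aligned slot-coding scheme, CAT and TOMCAT share no letter-in-position units at all, even though TOMCAT contains CAT verbatim. The function names below are illustrative assumptions only.

```python
# Minimal sketch of strict slot-coding (letter-by-position), assuming
# left-aligned positions numbered from 1. Illustrative only; not the
# encoding used by Sibley et al. (2008) or by the commentary's authors.

def slot_code(word):
    """Code a word as a set of letter-in-position units,
    e.g. CAT -> {(1, 'C'), (2, 'A'), (3, 'T')}."""
    return {(i + 1, ch) for i, ch in enumerate(word)}

def overlap(a, b):
    """Count the position-specific units two slot-coded words share."""
    return len(slot_code(a) & slot_code(b))

print(overlap("CAT", "CAP"))     # 2: C and A match in positions 1 and 2
print(overlap("CAT", "TOMCAT"))  # 0: the embedded CAT is invisible
```

Because every unit is bound to an absolute position, knowledge learned about CAT in positions 1-3 cannot transfer to the same letters occurring in positions 4-6, which is the generalization failure the commentary takes as its starting point.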

