Department of Computer Science, University of Otago, New Zealand.
Cognition. 2012 Nov;125(2):288-308. doi: 10.1016/j.cognition.2012.06.006. Epub 2012 Aug 3.
In this article we present a neural network model of sentence generation. The network has both technical and conceptual innovations. Its main technical novelty is in its semantic representations: the messages which form the input to the network are structured as sequences, so that message elements are delivered to the network one at a time. Rather than learning to linearise a static semantic representation as a sequence of words, our network rehearses a sequence of semantic signals, and learns to generate words from selected signals. Conceptually, the network's use of rehearsed sequences of semantic signals is motivated by work in embodied cognition, which posits that the structure of semantic representations has its origin in the serial structure of sensorimotor processing. The rich sequential structure of the network's semantic inputs also allows it to incorporate certain Chomskyan ideas about innate syntactic knowledge and parameter-setting, as well as a more empiricist account of the acquisition of idiomatic syntactic constructions.
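To make the contrast with linearising a static message concrete, the following is a minimal sketch (Python/NumPy) of the general idea: a recurrent network receives a rehearsed sequence of semantic signals, one element per timestep, and emits a word (or no word) from each. The vocabulary, vector dimensions, weights, and the notion of a "<none>" output are illustrative assumptions, not the authors' architecture or training procedure.

```python
import numpy as np

# Sketch only: a recurrent net that consumes semantic signals one per
# timestep (rather than a single static message) and emits words.
# All sizes, the vocabulary, and the weights are hypothetical.

rng = np.random.default_rng(0)

VOCAB = ["<none>", "the", "dog", "chases", "cat"]   # hypothetical vocabulary
SEM_DIM = 8     # size of one semantic signal (e.g. agent / action / patient cues)
HID_DIM = 16    # Elman-style recurrent hidden layer

# Random weights stand in for learned ones.
W_in  = rng.normal(0, 0.5, (HID_DIM, SEM_DIM))      # semantic signal -> hidden
W_rec = rng.normal(0, 0.5, (HID_DIM, HID_DIM))      # hidden -> hidden (context)
W_out = rng.normal(0, 0.5, (len(VOCAB), HID_DIM))   # hidden -> word scores

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def generate(message_sequence):
    """Rehearse a sequence of semantic signals, emitting at most one word per signal.

    message_sequence: list of SEM_DIM vectors, delivered to the network
    one at a time rather than as a single static representation.
    """
    h = np.zeros(HID_DIM)
    words = []
    for signal in message_sequence:
        h = np.tanh(W_in @ signal + W_rec @ h)    # update recurrent state
        probs = softmax(W_out @ h)                # distribution over words
        word = VOCAB[int(probs.argmax())]
        if word != "<none>":                      # some signals yield no word
            words.append(word)
    return words

# A toy "message" rehearsed as three semantic signals (agent, action, patient).
message = [rng.normal(size=SEM_DIM) for _ in range(3)]
print(generate(message))   # weights are untrained, so the output is arbitrary
```

In this sketch, word order falls out of the order in which semantic signals are rehearsed, which is the point of contrast with models that learn to linearise a static semantic representation.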