Hilberg W
Technische Universität Darmstadt, Fachgebiet Digitaltechnik, Germany.
Biol Cybern. 1997 Jan;76(1):23-40. doi: 10.1007/s004220050318.
Existing artificial neural network models are not very successful at understanding or generating natural language texts. It is therefore proposed to design novel neural network structures at higher levels of abstraction. This concept leads to a hierarchy of network layers, each of which extracts and stores local details and passes the remaining nonlocal context information on to higher levels. At the same time, data compression is achieved from layer to layer. The reuse of the same network elements (meta-words) at higher levels for different word sequences at the base level is introduced and discussed with respect to grammatical identity or similarity. In this way, text can be compressed into forms that are almost free of redundancy. Possible applications include the storage, transmission, understanding, generation and translation of texts.
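The layer-by-layer meta-word idea can be illustrated with a simplified symbolic sketch; this is not the network model of the paper, only an assumption-laden analogy in which each layer replaces a recurring adjacent word pair with a meta-word, stores that local detail in a codebook, and passes the shortened sequence upward. The function names (build_layer, compress) and the pairing rule are illustrative inventions, not anything specified in the abstract.

```python
from collections import Counter

def build_layer(tokens, min_count=2):
    """One abstraction layer (illustrative): replace the most frequent
    adjacent pair of tokens with a new meta-word, so the local detail is
    stored in the codebook and the shorter sequence is passed upward."""
    pairs = Counter(zip(tokens, tokens[1:]))
    if not pairs:
        return tokens, None
    (a, b), count = pairs.most_common(1)[0]
    if count < min_count:
        return tokens, None              # nothing recurring enough to abstract
    meta = f"<{a}+{b}>"                  # hypothetical meta-word symbol
    out, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and tokens[i] == a and tokens[i + 1] == b:
            out.append(meta)             # local detail replaced by meta-word
            i += 2
        else:
            out.append(tokens[i])
            i += 1
    return out, (meta, (a, b))

def compress(text, n_layers=4):
    """Stack several layers; each one shortens the sequence and records its
    local codebook entry, mimicking data compression from layer to layer."""
    tokens = text.split()
    codebook = []
    for _ in range(n_layers):
        tokens, entry = build_layer(tokens)
        if entry is None:
            break
        codebook.append(entry)
    return tokens, codebook

if __name__ == "__main__":
    sample = "the cat sat on the mat and the cat sat on the chair"
    compressed, codebook = compress(sample)
    print(compressed)   # shorter sequence of words and meta-words
    print(codebook)     # local details stored per layer
```

Note that this sketch abstracts only literal repetitions; the paper's proposal goes further by letting one meta-word stand for different but grammatically identical or similar word sequences.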