Department of Computer Science, Bucknell University.
College of Information Sciences and Computing, The Pennsylvania State University.
Cogn Sci. 2020 Nov;44(11):e12904. doi: 10.1111/cogs.12904.
We demonstrate that the key components of cognitive architectures (declarative and procedural memory) and their key capabilities (learning, memory retrieval, probability judgment, and utility estimation) can be implemented as algebraic operations on vectors and tensors in a high-dimensional space using a distributional semantics model. High-dimensional vector spaces underlie the success of modern machine learning techniques based on deep learning. However, while neural networks have an impressive ability to process data to find patterns, they do not typically model high-level cognition, and it is often unclear how they work. Symbolic cognitive architectures can capture the complexities of high-level cognition and provide human-readable, explainable models, but scale poorly to naturalistic, non-symbolic, or big data. Vector-symbolic architectures, where symbols are represented as vectors, bridge the gap between the two approaches. We posit that cognitive architectures, if implemented in a vector-space model, represent a useful, explanatory model of the internal representations of otherwise opaque neural architectures. Our proposed model, Holographic Declarative Memory (HDM), is a vector-space model based on distributional semantics. HDM accounts for primacy and recency effects in free recall, the fan effect in recognition, probability judgments, and human performance on an iterated decision task. HDM provides a flexible, scalable alternative to symbolic cognitive architectures at a level of description that bridges symbolic, quantum, and neural models of cognition.
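As an informal illustration of the vector-symbolic idea the abstract describes, the sketch below is not taken from the article; the dimensionality, the toy vocabulary, and all function names are assumptions. It binds two random high-dimensional vectors with circular convolution, the associative operation used in holographic reduced representations, and then retrieves one item from the composite trace by cueing with the other.

    # Minimal sketch (not the authors' code) of vector-symbolic binding and
    # retrieval with circular convolution; dimensionality and vocabulary are
    # illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    D = 1024  # dimensionality of the holographic vectors (assumed)

    def random_vector(d=D):
        # Random environment vector, normalized (Gaussian initialization assumed).
        v = rng.normal(0.0, 1.0 / np.sqrt(d), d)
        return v / np.linalg.norm(v)

    def bind(a, b):
        # Circular convolution: associates two vectors into one of the same size.
        return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n=len(a))

    def unbind(trace, cue):
        # Circular correlation: approximate inverse of bind.
        return np.fft.irfft(np.conj(np.fft.rfft(cue)) * np.fft.rfft(trace), n=len(trace))

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Toy declarative "memory": store the association dog-bone in a single trace.
    vocab = {w: random_vector() for w in ["dog", "bone", "cat"]}
    memory = bind(vocab["dog"], vocab["bone"])

    # Cue memory with "dog": the noisy echo is most similar to "bone".
    echo = unbind(memory, vocab["dog"])
    print({w: round(cosine(echo, v), 3) for w, v in vocab.items()})

Cueing the single-trace memory with "dog" yields a noisy vector whose cosine similarity is highest with "bone"; this kind of algebraic storage and cue-based retrieval is the basic behavior a holographic declarative memory builds on.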