
The brain's representations may be compatible with convolution-based memory models.

Authors

Kato Kenichi, Caplan Jeremy B

Affiliations

Department of Psychology, University of Alberta.

Publication information

Can J Exp Psychol. 2017 Dec;71(4):299-312. doi: 10.1037/cep0000115. Epub 2017 Feb 13.

Abstract

Convolution is a mathematical operation used in vector models of memory that have been successful in explaining a broad range of behaviour, including memory for associations between pairs of items, an important primitive of memory upon which much everyday memory behaviour depends. However, convolution models have trouble with naturalistic item representations, which are highly auto-correlated (as one finds, e.g., with photographs), and this has cast doubt on their neural plausibility. Consequently, modellers working with convolution have used item representations composed of randomly drawn values, but introducing such noise-like representations raises the question of how those random-like values might relate to actual item properties. We propose that a compromise solution to this problem may already exist. It has long been known that the brain tends to reduce auto-correlations in its inputs. For example, centre-surround cells in the retina approximate a Difference-of-Gaussians (DoG) transform. This enhances edges, but also turns natural images into images that are statistically closer to white noise. We show that DoG-transformed images, although not optimal compared with noise-like representations, survive the convolution model better than naturalistic images do. This is a proof of principle that the brain's pervasive tendency to reduce auto-correlations may yield representations that are already adequately compatible with convolution, supporting the neural plausibility of convolution-based association memory.
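The mechanism the abstract describes can be sketched numerically. The following is an illustrative NumPy demo, not the authors' code: it binds a cue to a target by circular convolution, retrieves by circular correlation, and compares retrieval quality for three cue types — a noise-like vector, a 1/f ("naturalistic", highly auto-correlated) signal, and that same signal after a 1-D DoG filter. The Gaussian widths and the 1/f construction are assumptions chosen for illustration, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1024

def cconv(x, y):
    """Circular convolution: the binding (association) operation."""
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(y)))

def ccorr(x, y):
    """Circular correlation: the approximate inverse, used for retrieval."""
    return np.real(np.fft.ifft(np.conj(np.fft.fft(x)) * np.fft.fft(y)))

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def retrieval_quality(cue):
    """Bind the cue to a random target, retrieve with the cue, and
    return the similarity of the retrieved vector to the target."""
    target = rng.standard_normal(n) / np.sqrt(n)
    memory = cconv(cue, target)
    return cosine(ccorr(cue, memory), target)

# 1) Noise-like item: the representation convolution models assume.
white = rng.standard_normal(n)
white /= np.linalg.norm(white)

# 2) "Naturalistic" item: 1/f amplitude spectrum, highly auto-correlated.
freqs = np.fft.fftfreq(n)
amp = np.zeros(n)
amp[1:] = 1.0 / np.abs(freqs[1:])
pink = np.real(np.fft.ifft(amp * np.fft.fft(rng.standard_normal(n))))
pink /= np.linalg.norm(pink)

# 3) DoG-transformed naturalistic item. In the frequency domain a
#    Difference-of-Gaussians is band-pass, which flattens a 1/f spectrum
#    toward white. The widths below are illustrative assumptions.
sigma_centre, sigma_surround = 1.0, 2.0
H = (np.exp(-2 * (np.pi * sigma_centre * freqs) ** 2)
     - np.exp(-2 * (np.pi * sigma_surround * freqs) ** 2))
dog = np.real(np.fft.ifft(H * np.fft.fft(pink)))
dog /= np.linalg.norm(dog)

q_white, q_pink, q_dog = (retrieval_quality(c) for c in (white, pink, dog))
print(f"white {q_white:.2f}  pink {q_pink:.2f}  DoG {q_dog:.2f}")
```

On this sketch, the noise-like cue retrieves best, the auto-correlated 1/f cue retrieves poorly, and the DoG-filtered version falls in between — matching the abstract's claim that DoG-transformed inputs, while not optimal, survive the convolution model better than raw naturalistic inputs. The intuition: retrieval quality depends on how flat the cue's power spectrum is, and the band-pass DoG flattens a 1/f spectrum.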
