
Hierarchical Sparse Coding of Objects in Deep Convolutional Neural Networks.

Authors

Liu Xingyu, Zhen Zonglei, Liu Jia

Affiliations

Beijing Key Laboratory of Applied Experimental Psychology, Faculty of Psychology, Beijing Normal University, Beijing, China.

Department of Psychology & Tsinghua Laboratory of Brain and Intelligence, Tsinghua University, Beijing, China.

Publication

Front Comput Neurosci. 2020 Dec 9;14:578158. doi: 10.3389/fncom.2020.578158. eCollection 2020.

Abstract

Recently, deep convolutional neural networks (DCNNs) have attained human-level performance on challenging object recognition tasks owing to their complex internal representation. However, it remains unclear how objects are represented in DCNNs with an overwhelming number of features and non-linear operations. In parallel, the same question has been extensively studied in the primate brain, and three types of coding schemes have been found: one object is coded by the entire neuronal population (distributed coding), by one single neuron (local coding), or by a subset of the neuronal population (sparse coding). Here we asked whether DCNNs adopted any of these coding schemes to represent objects. Specifically, we used the population sparseness index, which is widely used in neurophysiological studies of the primate brain, to characterize the degree of sparseness at each layer in representative DCNNs pretrained for object categorization. We found that the sparse coding scheme was adopted at all layers of the DCNNs, and the degree of sparseness increased along the hierarchy. That is, the coding scheme shifted from distributed-like coding at lower layers to local-like coding at higher layers. Further, the degree of sparseness was positively correlated with DCNNs' performance in object categorization, suggesting that the coding scheme was related to behavioral performance. Finally, with the lesion approach, we demonstrated that both external learning experiences and built-in gating operations were necessary to construct such a hierarchical coding scheme. In sum, our study provides direct evidence that DCNNs adopted a hierarchically-evolved sparse coding scheme as the biological brain does, suggesting the possibility of an implementation-independent principle underlying object recognition.
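The population sparseness index mentioned in the abstract has several variants in the neurophysiology literature; a common one is the Treves-Rolls measure, where a value near 0 indicates distributed coding (all units equally active) and a value near 1 indicates local coding (a single active unit). As an illustrative sketch only (the exact index and normalization used by the authors may differ):

```python
import numpy as np

def population_sparseness(activations):
    """Treves-Rolls population sparseness, normalized to [0, 1].

    0 -> fully distributed (all units equally active);
    1 -> fully local (exactly one unit active).
    """
    a = np.asarray(activations, dtype=float)
    n = a.size
    # Treves-Rolls ratio: (mean activation)^2 / mean squared activation
    tr = (a.sum() / n) ** 2 / ((a ** 2).sum() / n)
    # Rescale so that the index spans [0, 1] regardless of population size
    return (1.0 - tr) / (1.0 - 1.0 / n)

# A uniform population is maximally distributed; a one-hot population is maximally local.
print(population_sparseness([1, 1, 1, 1]))  # 0.0
print(population_sparseness([1, 0, 0, 0]))  # 1.0
```

Applied layer by layer to unit activations evoked by an object image, an index rising with depth would reproduce the distributed-to-local shift the paper reports.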


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6992/7755594/8a4eedb74d6a/fncom-14-578158-g0001.jpg
