

Beyond ℓ1 sparse coding in V1.

Affiliations

Université Paris-Saclay, CNRS, CentraleSupélec, Laboratoire des Signaux et Systèmes, Paris, France.

CNRS, UCA, INRIA, Laboratoire d'Informatique, Signaux et Systèmes de Sophia Antipolis, Sophia Antipolis, France.

Publication

PLoS Comput Biol. 2023 Sep 12;19(9):e1011459. doi: 10.1371/journal.pcbi.1011459. eCollection 2023 Sep.

Abstract

Growing evidence indicates that only a sparse subset from a pool of sensory neurons is active for the encoding of visual stimuli at any instant in time. Traditionally, to replicate such biological sparsity, generative models have used the ℓ1 norm as a penalty due to its convexity, which makes it amenable to fast and simple algorithmic solvers. In this work, we use biological vision as a test-bed and show that the soft thresholding operation associated with the use of the ℓ1 norm is highly suboptimal, in terms of performance, compared to other functions suited to approximating ℓp with 0 ≤ p < 1 (including recently proposed continuous exact relaxations). We show that ℓ1 sparsity employs a pool with more neurons, i.e. has a higher degree of overcompleteness, in order to maintain the same reconstruction error as the other methods considered. More specifically, at the same sparsity level, the thresholding algorithm using the ℓ1 norm as a penalty requires a dictionary of ten times more units than the proposed approach, where a non-convex continuous relaxation of the ℓ0 pseudo-norm is used, to reconstruct the external stimulus equally well. At a fixed sparsity level, both ℓ0- and ℓ1-based regularization develop units with receptive field (RF) shapes similar to biological neurons in V1 (and a subset of neurons in V2), but ℓ0-based regularization shows approximately five times better reconstruction of the stimulus. Our results, in conjunction with recent metabolic findings, indicate that for V1 to operate efficiently it should follow a coding regime which uses a regularization that is closer to the ℓ0 pseudo-norm rather than the ℓ1 one, and suggest a similar mode of operation for the sensory cortex in general.
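The contrast at the heart of the abstract is between the proximal operators of the two penalties: the ℓ1 norm yields soft thresholding, which both zeroes small coefficients and shrinks the surviving ones (biasing the reconstruction), while an ℓ0-style penalty yields hard thresholding, which zeroes small coefficients but leaves large ones untouched. The minimal sketch below illustrates this difference; it is not the authors' implementation, and `lam` is used directly as the cutoff (for the exact ℓ0 proximal operator with penalty weight λ, the cutoff would be √(2λ)).

```python
import numpy as np

def soft_threshold(x, lam):
    # Proximal operator of lam * ||x||_1:
    # zeroes coefficients with |x| <= lam and shrinks the rest by lam.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def hard_threshold(x, lam):
    # Hard thresholding (l0-style): zeroes coefficients with |x| <= lam
    # and keeps the survivors exactly as they are -- no shrinkage bias.
    return np.where(np.abs(x) > lam, x, 0.0)

coeffs = np.array([-2.0, -0.5, 0.1, 0.8, 3.0])
print(soft_threshold(coeffs, 1.0))  # [-1. -0.  0.  0.  2.]
print(hard_threshold(coeffs, 1.0))  # [-2.  0.  0.  0.  3.]
```

Both operators produce the same support (two active units here), but the soft-thresholded survivors are shrunk toward zero, which is the reconstruction bias the paper argues forces ℓ1-based coding to compensate with a larger, more overcomplete dictionary.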


Fig 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6d58/10516432/b7f2e74ee776/pcbi.1011459.g001.jpg
