Probing the Structure and Functional Properties of the Dropout-Induced Correlated Variability in Convolutional Neural Networks.

Affiliations

Department of Computer Science, University of Miami, Coral Gables, FL 33146, U.S.A.

Department of Systems and Computational Biology, Dominick Purpura Department of Neuroscience, and Department of Ophthalmology and Visual Sciences, Albert Einstein College of Medicine, Bronx, NY 10461, U.S.A.

Publication Information

Neural Comput. 2024 Mar 21;36(4):621-644. doi: 10.1162/neco_a_01652.

Abstract

Computational neuroscience studies have shown that the structure of neural variability to an unchanged stimulus affects the amount of information encoded. Some artificial deep neural networks, such as those with Monte Carlo dropout layers, also have variable responses when the input is fixed. However, the structure of the trial-by-trial neural covariance in neural networks with dropout has not been studied, and its role in decoding accuracy is unknown. We studied the above questions in a convolutional neural network model with dropout in both the training and testing phases. We found that trial-by-trial correlation between neurons (i.e., noise correlation) is positive and low dimensional. Neurons that are close in a feature map have larger noise correlation. These properties are surprisingly similar to the findings in the visual cortex. We further analyzed the alignment of the main axes of the covariance matrix. We found that different images share a common trial-by-trial noise covariance subspace, and they are aligned with the global signal covariance. This evidence that the noise covariance is aligned with signal covariance suggests that noise covariance in dropout neural networks reduces network accuracy, which we further verified directly with a trial-shuffling procedure commonly used in neuroscience. These findings highlight a previously overlooked aspect of dropout layers that can affect network performance. Such dropout networks could also potentially be a computational model of neural variability.
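The two core measurements described above, trial-by-trial noise correlations under Monte Carlo dropout (dropout left on at test time with a fixed input) and the trial-shuffling control that destroys them, can be sketched in a few lines of NumPy. The single linear layer, layer sizes, and dropout rate below are illustrative assumptions, not the authors' architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(x, p, rng):
    """Inverted dropout: zero each unit with probability p, rescale survivors."""
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

# Toy "layer" with dropout kept on at test time (Monte Carlo dropout).
# The input and weights are fixed, so all trial-to-trial variability
# comes from the random dropout mask.
n_in, n_units, n_trials, p = 50, 20, 2000, 0.5
W = rng.normal(size=(n_units, n_in))
x = rng.normal(size=n_in)

responses = np.stack([W @ dropout(x, p, rng) for _ in range(n_trials)])  # (trials, units)

# Trial-by-trial ("noise") correlation between units for this fixed input.
noise_corr = np.corrcoef(responses, rowvar=False)

# Trial shuffling: permute each unit's trials independently. This preserves
# every unit's marginal response distribution but destroys the shared
# trial-by-trial covariance, mimicking the control used in neuroscience.
shuffled = responses.copy()
for u in range(n_units):
    rng.shuffle(shuffled[:, u])  # column slice is a view; shuffles in place
shuffled_corr = np.corrcoef(shuffled, rowvar=False)

off = ~np.eye(n_units, dtype=bool)
print("mean |noise corr| before shuffle:", np.abs(noise_corr[off]).mean())
print("mean |noise corr| after shuffle: ", np.abs(shuffled_corr[off]).mean())
```

In this toy linear setting the correlations have no particular sign structure; the positive, low-dimensional correlations reported in the paper arise in trained convolutional networks. The shuffle, however, behaves the same way: off-diagonal correlations collapse toward zero while single-unit statistics are untouched.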

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/21d1/11164410/9cabbacf1299/nihms-1999170-f0001.jpg

