Department of Computer Science, University of Miami, Coral Gables, FL 33146, U.S.A.
Department of Systems and Computational Biology, Dominick Purpura Department of Neuroscience, and Department of Ophthalmology and Visual Sciences, Albert Einstein College of Medicine, Bronx, NY 10461, U.S.A.
Neural Comput. 2024 Mar 21;36(4):621-644. doi: 10.1162/neco_a_01652.
Computational neuroscience studies have shown that the structure of neural variability in response to an unchanged stimulus affects the amount of information encoded. Some artificial deep neural networks, such as those with Monte Carlo dropout layers, also produce variable responses when the input is fixed. However, the structure of the trial-by-trial neural covariance in networks with dropout has not been studied, and its role in decoding accuracy is unknown. We studied these questions in a convolutional neural network model with dropout active in both the training and testing phases. We found that the trial-by-trial correlation between neurons (i.e., the noise correlation) is positive and low dimensional, and that neurons that are close together within a feature map have larger noise correlations. These properties are surprisingly similar to findings in the visual cortex. We further analyzed the alignment of the main axes of the covariance matrix and found that different images share a common trial-by-trial noise covariance subspace, which is aligned with the global signal covariance. This alignment of noise covariance with signal covariance suggests that the noise covariance in dropout neural networks reduces network accuracy, which we verified directly with a trial-shuffling procedure commonly used in neuroscience. These findings highlight a previously overlooked aspect of dropout layers that can affect network performance. Such dropout networks could also serve as a computational model of neural variability.
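The pipeline the abstract describes — keeping dropout active at test time so that repeated presentations of the same input yield variable responses, measuring trial-by-trial (noise) correlations between units, and destroying those correlations by trial shuffling — can be sketched in a toy NumPy model. All layer sizes, weights, and the dropout rate below are illustrative assumptions, not the paper's convolutional architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer model with Monte Carlo dropout; sizes and the dropout
# rate are illustrative choices, not the paper's network.
n_in, n_hidden, n_out, n_trials, p_drop = 20, 50, 30, 1000, 0.5
W1 = rng.normal(size=(n_hidden, n_in))
# Nonnegative readout weights (an illustrative choice) guarantee that the
# shared dropout noise induces positive correlations between output units.
W2 = np.abs(rng.normal(size=(n_out, n_hidden)))
x = rng.normal(size=n_in)                      # one fixed stimulus

def forward(x):
    """One stochastic 'trial': dropout stays active at test time."""
    h = np.maximum(W1 @ x, 0.0)                # ReLU hidden responses
    mask = rng.random(n_hidden) > p_drop       # Monte Carlo dropout mask
    return W2 @ (h * mask / (1.0 - p_drop))    # inverted-dropout scaling

# Repeated presentations of the SAME input give trial-by-trial variability
R = np.stack([forward(x) for _ in range(n_trials)])   # (trials, units)

# Noise correlations: correlation between units across trials
upper = np.triu_indices(n_out, k=1)
mean_nc = np.nanmean(np.corrcoef(R, rowvar=False)[upper])

# Trial shuffling: permute trials independently per unit, preserving each
# unit's marginal statistics while destroying across-unit correlations
R_shuf = np.column_stack([rng.permutation(R[:, j]) for j in range(n_out)])
mean_nc_shuf = np.nanmean(np.corrcoef(R_shuf, rowvar=False)[upper])

print(f"mean noise correlation: {mean_nc:.3f} (shuffled: {mean_nc_shuf:.3f})")
```

Because every output unit reads from the same stochastically masked hidden layer, the shuffled correlations collapse toward zero while the unshuffled ones stay positive; comparing decoding accuracy on the original versus trial-shuffled responses is the standard neuroscience test of whether the noise covariance helps or hurts the code.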