Department of Physics, University of California San Diego, San Diego, CA 92093, U.S.A.
Computational Neurobiology Laboratory, Salk Institute for Biological Studies, La Jolla, CA 92037, and Department of Physics, University of California San Diego, San Diego, CA 92093, U.S.A.
Neural Comput. 2019 Jun;31(6):1015-1047. doi: 10.1162/neco_a_01193. Epub 2019 Apr 12.
Quantifying mutual information between inputs and outputs of a large neural circuit is an important open problem in both machine learning and neuroscience. However, evaluation of the mutual information is known to be generally intractable for large systems due to the exponential growth in the number of terms that need to be evaluated. Here we show how information contained in the responses of large neural populations can be effectively computed provided the input-output functions of individual neurons can be measured and approximated by a logistic function applied to a potentially nonlinear function of the stimulus. Neural responses in this model can remain sensitive to multiple stimulus components. We show that the mutual information in this model can be effectively approximated as a sum of lower-dimensional conditional mutual information terms. The approximations become exact in the limit of large neural populations and for certain conditions on the distribution of receptive fields across the neural population. We empirically find that these approximations continue to work well even when the conditions on the receptive field distributions are not fulfilled. The computing cost for the proposed methods grows linearly in the dimension of the input and compares favorably with other approximations.
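To make the scaling problem concrete, the sketch below (not code from the paper) simulates a small population of logistic neurons whose spike probability is a sigmoid of a nonlinear stimulus feature, and evaluates the mutual information I(X; R) by brute-force enumeration of all 2^N binary response patterns. The feature f(x) = x^2, the gains a_i and offsets b_i, and the discretized Gaussian stimulus grid are illustrative assumptions; the point is that the exact sum contains 2^N terms, which is precisely the exponential cost that the low-dimensional conditional-information approximation described in the abstract is designed to avoid.

    # Minimal sketch, assuming a 1-D stimulus and conditionally independent
    # logistic neurons; not the paper's implementation.
    import itertools
    import numpy as np

    rng = np.random.default_rng(0)

    # Discretized stimulus distribution p(x): a standard Gaussian on a grid.
    x = np.linspace(-3.0, 3.0, 61)
    p_x = np.exp(-0.5 * x**2)
    p_x /= p_x.sum()

    # Each neuron spikes with probability sigmoid(a_i * f(x) + b_i),
    # where f is a (possibly nonlinear) function of the stimulus.
    def f(x):
        return x**2                      # illustrative nonlinear stimulus feature

    N = 5                                # small so 2^N enumeration stays feasible
    a = rng.normal(1.0, 0.3, size=N)     # illustrative gains
    b = rng.normal(-1.0, 0.5, size=N)    # illustrative offsets
    p_spike = 1.0 / (1.0 + np.exp(-(a[None, :] * f(x)[:, None] + b[None, :])))  # shape (61, N)

    # Brute-force mutual information I(X; R) in bits over all 2^N response patterns.
    mi = 0.0
    for r in itertools.product([0, 1], repeat=N):
        r = np.array(r)
        # P(r | x), with neurons conditionally independent given the stimulus.
        p_r_given_x = np.prod(np.where(r, p_spike, 1.0 - p_spike), axis=1)
        p_r = np.dot(p_x, p_r_given_x)
        if p_r > 0:
            ratio = np.where(p_r_given_x > 0, p_r_given_x / p_r, 1.0)
            mi += np.sum(p_x * p_r_given_x * np.log2(ratio))

    print(f"I(X; R) for N={N} logistic neurons: {mi:.3f} bits")

For N on the order of a few neurons this enumeration is immediate, but the number of terms doubles with every added neuron, which is why the paper replaces it with a sum of lower-dimensional conditional mutual information terms whose cost grows linearly in the stimulus dimension.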