Nadal J P, Brunel N, Parga N
Laboratoire de Physique Statistique, Ecole Normale Supérieure, Paris, France.
Network. 1998 May;9(2):207-17.
We prove that maximization of the mutual information between the output and the input of a feedforward neural network leads to full redundancy reduction under the following sufficient conditions: (i) the input signal is a (possibly nonlinear) invertible mixture of independent components; (ii) there is no input noise; (iii) the activity of each output neuron is a (possibly stochastic) variable with a probability distribution that depends on the stimulus through a deterministic function of the inputs (where both the distributions and the functions may differ from neuron to neuron); (iv) the mutual information is optimized over all these deterministic functions. This result extends that of Nadal and Parga (1994), who considered the case of deterministic outputs.
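To make the statement concrete, the quantities involved can be written in standard information-theoretic notation (a sketch added here for clarity; the symbols X for the input, V for the output vector, and the redundancy measure R are notational choices, not quoted from the paper):

\[ I(X;V) = H(V) - H(V \mid X), \qquad R \equiv \sum_{i} H(V_i) - H(V) \;\ge\; 0 . \]

Here R vanishes exactly when the outputs are statistically independent. Under conditions (i)-(iv), the theorem says that any maximum of I(X;V) achieves full redundancy reduction, R = 0, i.e. the joint output distribution factorizes:

\[ P(v_1,\dots,v_N) \;=\; \prod_{i=1}^{N} P_i(v_i). \]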