Department of Neuroscience and Mind/Brain Institute, Johns Hopkins University, Baltimore, MD 21218, USA.
Neural Comput. 2011 Nov;23(11):2833-67. doi: 10.1162/NECO_a_00196. Epub 2011 Aug 18.
When a neuronal spike train is observed, what can we deduce from it about the properties of the neuron that generated it? A natural way to answer this question is to make an assumption about the type of neuron, select an appropriate model of that type, and then choose the model parameters that are most likely to have generated the observed spike train. This is the maximum likelihood method. If the neuron obeys simple integrate-and-fire dynamics, Paninski, Pillow, and Simoncelli (2004) showed that its negative log-likelihood function is convex and that, at least in principle, its unique global minimum can thus be found by gradient descent techniques. Many biological neurons are, however, known to generate a richer repertoire of spiking behaviors than a simple integrate-and-fire model can explain. For instance, such a model retains only an implicit memory of its input (through spike-induced currents), not an explicit one; one physiological situation it cannot explain is the absence of firing when the input current is increased very slowly. We therefore use an expanded model (Mihalas & Niebur, 2009), which is capable of generating a large number of complex firing patterns while still being linear. Linearity is important because it preserves the distribution of the random variables and thus still allows maximum likelihood methods to be used. In this study, we show that although convexity of the negative log-likelihood function is not guaranteed for this model, the minimum of this function yields a good estimate of the model parameters, in particular when the noise level is treated as a free parameter. Furthermore, we show that a nonlinear function minimization method (the r-algorithm with space dilation) usually reaches the global minimum.
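The maximum likelihood idea described above can be illustrated with a minimal sketch: simulate a leaky integrate-and-fire neuron with a soft ("escape-rate") threshold, then recover its threshold by minimizing the negative log-likelihood of the observed spike train. This is not the authors' model or algorithm; all parameter values, the logistic escape rate, and the one-dimensional grid search are illustrative assumptions standing in for the full multi-parameter fit described in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- illustrative parameters (assumptions, not the paper's values) ---
dt, T = 1.0, 5000              # 1 ms bins, 5 s of data
tau, I, v_reset = 20.0, 0.8, 0.0
theta_true, beta = 10.0, 1.0   # soft threshold and its sharpness

def simulate(theta):
    """Leaky integration of a constant input with a per-bin spike probability."""
    v, volts, spikes = v_reset, [], []
    for _ in range(T):
        v += dt * (-v / tau + I)                        # leaky integration
        p = 1.0 / (1.0 + np.exp(-(v - theta) / beta))   # escape-rate spike prob.
        s = rng.random() < p
        volts.append(v)
        spikes.append(s)
        if s:
            v = v_reset                                  # reset after a spike
    return np.array(volts), np.array(spikes, dtype=float)

volts, spikes = simulate(theta_true)

def neg_log_likelihood(theta):
    # Given the observed spike train, the voltage path is deterministic,
    # so the likelihood factorizes into per-bin Bernoulli terms.
    p = 1.0 / (1.0 + np.exp(-(volts - theta) / beta))
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -np.sum(spikes * np.log(p) + (1 - spikes) * np.log(1 - p))

# Crude 1-D search over the threshold; the paper instead uses a nonlinear
# minimization method (r-algorithm with space dilation) over all parameters.
grid = np.linspace(5.0, 15.0, 201)
theta_hat = grid[np.argmin([neg_log_likelihood(th) for th in grid])]
print(f"true theta = {theta_true}, ML estimate = {theta_hat:.2f}")
```

With several thousand bins the estimate lands close to the true threshold; the point of the sketch is only that, once the voltage path is conditioned on the observed spikes, the fit reduces to minimizing a sum of Bernoulli log-terms.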