Grewe Jan, Weckström Matti, Egelhaaf Martin, Warzecha Anne-Kathrin
Department of Neurobiology, Bielefeld University, Bielefeld, Germany.
PLoS One. 2007 Dec 19;2(12):e1328. doi: 10.1371/journal.pone.0001328.
Response variability is a fundamental issue in neural coding because it limits all information processing. The reliability of neuronal coding is quantified by various approaches in different studies. In most cases it is largely unclear to what extent the conclusions depend on the applied reliability measure, making a comparison across studies almost impossible. We demonstrate that different reliability measures can lead to very different conclusions even when applied to the same set of data: in particular, we applied information-theoretical measures (Shannon information capacity and Kullback-Leibler divergence) as well as a discrimination measure derived from signal-detection theory to the responses of blowfly photoreceptors, which represent a well-established model system for sensory information processing. We stimulated the photoreceptors with white-noise modulated light intensity fluctuations of different contrasts. Surprisingly, the signal-detection approach allows safe discrimination of the photoreceptor responses even when the response signal-to-noise ratio (SNR) is well below unity, whereas the Shannon information capacity and the Kullback-Leibler divergence indicate a very low performance. Applying different measures can therefore lead to very different interpretations of the system's coding performance. As a consequence of their lower sensitivity compared to the signal-detection approach, the information-theoretical measures overestimate internal noise sources and underestimate the importance of photon shot noise. We stress that none of the applied measures, and most likely no other measure alone, allows for an unbiased estimation of a neuron's coding properties. Therefore, the applied measure needs to be selected with respect to the scientific question and the analyzed neuron's functional context.
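The abstract's central contrast can be sketched numerically: a Gaussian channel with per-sample SNR below unity has a low Shannon capacity per sample, yet a signal-detection style discrimination that pools an entire response trace against stimulus templates can still classify responses almost perfectly. The following is a minimal toy simulation of that idea, not the paper's photoreceptor data or analysis pipeline; the trace length, SNR, and nearest-template classifier are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Gaussian channel (hypothetical parameters, not measured data):
# two fixed "stimulus" waveforms; each response = waveform + unit-variance noise.
n_samples = 500          # time points per response trace
snr = 0.5                # signal power / noise power, well below unity
sig_sd = np.sqrt(snr)    # signal sd, with noise sd fixed at 1

s1 = sig_sd * rng.standard_normal(n_samples)   # template for stimulus 1
s2 = sig_sd * rng.standard_normal(n_samples)   # template for stimulus 2

# Shannon capacity per sample of a Gaussian channel: 0.5 * log2(1 + SNR).
# With SNR = 0.5 this is ~0.29 bit -- "very low performance".
capacity = 0.5 * np.log2(1.0 + snr)

# Signal-detection view: classify each noisy trace by its nearest template,
# pooling evidence over the whole trace rather than sample by sample.
n_trials = 1000
correct = 0
for true_s in (s1, s2):
    for _ in range(n_trials):
        r = true_s + rng.standard_normal(n_samples)
        d1 = np.sum((r - s1) ** 2)
        d2 = np.sum((r - s2) ** 2)
        guess = s1 if d1 < d2 else s2
        correct += guess is true_s

frac_correct = correct / (2 * n_trials)
print(f"capacity per sample: {capacity:.2f} bits")
print(f"discrimination accuracy at SNR={snr}: {frac_correct:.3f}")
```

Because the squared-distance decision variable sums noise over all 500 samples, the two templates are separated by many noise standard deviations and discrimination is essentially safe, even though each individual sample carries well under one bit. This mirrors, in a stylized way, how a discrimination measure and an information-capacity measure can suggest very different coding performance for the same responses.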