Friedrich Johannes, Yang Weijian, Soudry Daniel, Mu Yu, Ahrens Misha B, Yuste Rafael, Peterka Darcy S, Paninski Liam
Department of Statistics, Grossman Center for the Statistics of Mind, and Center for Theoretical Neuroscience, Columbia University, New York, New York, United States of America.
Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, Virginia, United States of America.
PLoS Comput Biol. 2017 Aug 3;13(8):e1005685. doi: 10.1371/journal.pcbi.1005685. eCollection 2017 Aug.
Progress in modern neuroscience critically depends on our ability to observe the activity of large neuronal populations with cellular spatial and high temporal resolution. However, two bottlenecks constrain efforts towards fast imaging of large populations. First, the resulting large video data is challenging to analyze. Second, there is an explicit tradeoff between imaging speed, signal-to-noise, and field of view: with current recording technology we cannot image very large neuronal populations with simultaneously high spatial and temporal resolution. Here we describe multi-scale approaches for alleviating both of these bottlenecks. First, we show that spatial and temporal decimation techniques based on simple local averaging provide order-of-magnitude speedups in spatiotemporally demixing calcium video data into estimates of single-cell neural activity. Second, once the shapes of individual neurons have been identified at fine scale (e.g., after an initial phase of conventional imaging with standard temporal and spatial resolution), we find that the spatial/temporal resolution tradeoff shifts dramatically: after demixing we can accurately recover denoised fluorescence traces and deconvolved neural activity of each individual neuron from coarse scale data that has been spatially decimated by an order of magnitude. This offers a cheap method for compressing this large video data, and also implies that it is possible to either speed up imaging significantly, or to "zoom out" by a corresponding factor to image order-of-magnitude larger neuronal populations with minimal loss in accuracy or temporal resolution.
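The decimation described above amounts to replacing non-overlapping spatiotemporal blocks of the movie with their mean. A minimal sketch of that local-averaging step, assuming the movie is stored as a NumPy array of shape (time, x, y); the function name and decimation factors are illustrative, not from the paper:

```python
import numpy as np

def decimate(video, ds_t=4, ds_x=2, ds_y=2):
    """Downsample a (T, X, Y) calcium-imaging movie by simple local
    averaging: each non-overlapping ds_t x ds_x x ds_y block is
    replaced by its mean. Factors here are illustrative."""
    T, X, Y = video.shape
    # Trim edges so each dimension divides evenly by its factor.
    video = video[:T - T % ds_t, :X - X % ds_x, :Y - Y % ds_y]
    # Reshape so each block occupies its own axes, then average them.
    return video.reshape(T // ds_t, ds_t,
                         X // ds_x, ds_x,
                         Y // ds_y, ds_y).mean(axis=(1, 3, 5))
```

Decimating by a factor of 2 along each spatial axis and 4 in time reduces the data volume 16-fold, which is the source of the order-of-magnitude speedups in the downstream demixing.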