Liu Chang, Lu Jiayu, Wu Yicun, Ye Xin, Ahrens Allison M, Platisa Jelena, Pieribone Vincent A, Chen Jerry L, Tian Lei
Boston University, Department of Biomedical Engineering, Boston, Massachusetts, United States.
Boston University, Department of Electrical and Computer Engineering, Boston, Massachusetts, United States.
Neurophotonics. 2024 Oct;11(4):045007. doi: 10.1117/1.NPh.11.4.045007. Epub 2024 Oct 29.
Voltage imaging is a powerful tool for studying the dynamics of neuronal activity in the brain. However, voltage imaging data are fundamentally corrupted by severe Poisson noise in the low-photon regime, which hinders the accurate extraction of neuronal activity. Self-supervised deep learning denoising methods have shown great potential for addressing the challenges of low-photon voltage imaging without the need for ground truth, but they usually suffer from a trade-off between spatial and temporal performance.
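To make the low-photon noise problem concrete, the short NumPy sketch below (ours, not from the paper) simulates a purely shot-noise-limited measurement and confirms that the signal-to-noise ratio of a Poisson process scales as the square root of the mean photon count, so it degrades sharply in the low-photon regime. The photon counts are illustrative values, not ones reported in the paper.

```python
# A minimal sketch of shot-noise-limited imaging: for a mean photon count N,
# the Poisson SNR is N / sqrt(N) = sqrt(N), so dim pixels are far noisier.
import numpy as np

rng = np.random.default_rng(0)

for mean_photons in (1000, 100, 10):          # illustrative photon budgets
    clean = np.full(100_000, mean_photons, dtype=float)  # constant "true" signal
    noisy = rng.poisson(clean).astype(float)             # shot-noise-corrupted samples
    snr = clean.mean() / noisy.std()                     # empirical SNR ~ sqrt(N)
    print(f"mean photons = {mean_photons:5d}  ->  SNR ~ {snr:.1f}"
          f" (theory sqrt(N) = {np.sqrt(mean_photons):.1f})")
```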
We present DeepVID v2, a self-supervised denoising framework with decoupled spatial and temporal enhancement that significantly augments low-photon voltage imaging.
DeepVID v2 builds on our original DeepVID framework, which performs frame-based denoising by using a sequence of frames surrounding the central frame targeted for denoising, thereby leveraging temporal information and ensuring temporal consistency. As in DeepVID, the network also integrates multiple blind pixels in the central frame to enrich the learning of local spatial information. In addition, DeepVID v2 introduces a new spatial prior extraction branch that captures fine structural details to learn high-resolution spatial information. Two variants of DeepVID v2 meet specific denoising needs: an online version tailored for real-time inference with a limited number of frames, and an offline version designed to leverage the full dataset for optimal temporal and spatial performance.
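To illustrate the blind-pixel self-supervision described above, here is a minimal PyTorch sketch of one training step under our own simplifying assumptions: the 7-frame window, 1% mask ratio, zero-filling of blind pixels, MSE loss, and the stand-in convolutional `model` are all illustrative choices, not the published DeepVID v2 architecture or its spatial prior branch.

```python
# A minimal sketch (our assumptions, not the paper's implementation) of
# blind-pixel self-supervised denoising: predict the central frame of a
# temporal window while supervising only on randomly masked pixels.
import torch
import torch.nn as nn

def blind_pixel_step(model, stack, center_idx, mask_ratio=0.01):
    """One self-supervised step: reconstruct the central frame from its
    temporal neighborhood, computing the loss only at blind pixels."""
    target = stack[:, center_idx]                  # (B, H, W) noisy central frame
    mask = torch.rand_like(target) < mask_ratio    # random blind-pixel mask
    inputs = stack.clone()
    # Blind the selected pixels so the network must infer them from
    # spatiotemporal context instead of copying the input.
    inputs[:, center_idx][mask] = 0.0
    pred = model(inputs)                           # (B, H, W) denoised estimate
    return nn.functional.mse_loss(pred[mask], target[mask])

# Stand-in network: any image-to-image model mapping the frame stack
# (frames treated as channels) to the denoised central frame fits here.
model = nn.Sequential(nn.Conv2d(7, 32, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(32, 1, 3, padding=1), nn.Flatten(0, 1))
stack = torch.rand(4, 7, 64, 64)                   # (batch, frames, H, W)
loss = blind_pixel_step(model, stack, center_idx=3)
loss.backward()
```

Because the blind pixels never see their own noisy values, the network cannot learn the identity mapping; under (approximately) independent per-pixel noise, predicting the noisy target at masked locations is what allows training without clean ground truth.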
We demonstrate that DeepVID v2 overcomes the trade-off between spatial and temporal performance, achieving superior denoising that resolves both high-resolution spatial structures and rapid neuronal dynamics. We further show that DeepVID v2 generalizes to different imaging conditions, including time-series measurements with various signal-to-noise ratios and extreme low-photon conditions.
Our results establish DeepVID v2 as a promising tool for enhancing voltage imaging. The framework has the potential to generalize to other low-photon imaging modalities and to greatly facilitate the study of neuronal activity in the brain.