

DeepVID v2: Self-Supervised Denoising with Decoupled Spatiotemporal Enhancement for Low-Photon Voltage Imaging.

Author Information

Liu Chang, Lu Jiayu, Wu Yicun, Ye Xin, Ahrens Allison M, Platisa Jelena, Pieribone Vincent A, Chen Jerry L, Tian Lei

Affiliations

Boston University, Department of Biomedical Engineering, Boston, MA 02215, USA.

Boston University, Department of Electrical and Computer Engineering, Boston, MA 02215, USA.

Publication Information

bioRxiv. 2024 May 16:2024.05.16.594448. doi: 10.1101/2024.05.16.594448.

Abstract

SIGNIFICANCE

Voltage imaging is a powerful tool for studying the dynamics of neuronal activities in the brain. However, voltage imaging data are fundamentally corrupted by severe Poisson noise in the low-photon regime, which hinders the accurate extraction of neuronal activities. Self-supervised deep learning denoising methods have shown great potential in addressing the challenges in low-photon voltage imaging without the need for ground truth, but usually suffer from the tradeoff between spatial and temporal performance.

AIM

We present DeepVID v2, a novel self-supervised denoising framework with decoupled spatial and temporal enhancement capability to significantly augment low-photon voltage imaging.

APPROACH

DeepVID v2 builds on our original DeepVID framework, which performs frame-based denoising by using a sequence of frames surrounding the central frame targeted for denoising, leveraging temporal information to ensure consistency. The network further integrates multiple blind pixels in the central frame to enrich the learning of local spatial information. Additionally, DeepVID v2 introduces a new edge-extraction branch that captures fine structural details, allowing the network to learn high-spatial-resolution information.
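The blind-pixel idea described above can be sketched in a few lines. This is an illustrative Noise2Void-style masking step, not the authors' implementation: the function names, the temporal window size, and the number of blind pixels are all hypothetical choices for demonstration, assuming a video stored as a NumPy array of shape (frames, height, width).

```python
import numpy as np

rng = np.random.default_rng(0)

def make_training_example(video, t, window=7, n_blind=16):
    """Build one blind-pixel training example (illustrative sketch).

    The network input is a temporal window of frames centred on frame `t`.
    A handful of "blind" pixels in the central frame are replaced by
    randomly chosen neighbouring values; training then asks the network to
    predict the original values at exactly those pixels, so it can never
    simply copy the noise it must remove (self-supervised, no ground truth).
    """
    half = window // 2
    stack = video[t - half : t + half + 1].copy()   # (window, H, W)
    center = stack[half]                            # view into stack
    H, W = center.shape

    # choose blind-pixel coordinates in the central frame (away from borders)
    ys = rng.integers(1, H - 1, size=n_blind)
    xs = rng.integers(1, W - 1, size=n_blind)
    targets = center[ys, xs].copy()                 # values to predict

    # mask each blind pixel with a random neighbour's value
    dy = rng.integers(-1, 2, size=n_blind)
    dx = rng.integers(-1, 2, size=n_blind)
    center[ys, xs] = center[ys + dy, xs + dx]

    return stack, (ys, xs), targets

def masked_mse(pred_center, coords, targets):
    """Loss evaluated only at the blind-pixel locations."""
    ys, xs = coords
    return float(np.mean((pred_center[ys, xs] - targets) ** 2))

# toy noisy video: 20 frames of 32x32 Poisson noise (low-photon regime)
video = rng.poisson(5.0, size=(20, 32, 32)).astype(np.float32)
stack, coords, targets = make_training_example(video, t=10)
# an identity "network" incurs loss wherever masking changed a pixel
loss = masked_mse(stack[stack.shape[0] // 2], coords, targets)
```

In an actual training loop, `stack` would be fed to a denoising network whose output replaces the identity prediction here, and the masked loss would be backpropagated; the edge-extraction branch would add a separate loss term on spatial gradients of the central frame.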

RESULTS

We demonstrate that DeepVID v2 overcomes the tradeoff between spatial and temporal performance, achieving superior denoising capability in resolving both high-resolution spatial structures and rapid temporal neuronal activities. We further show that DeepVID v2 generalizes to different imaging conditions, including time-series measurements with various signal-to-noise ratios (SNRs) and extremely low-photon conditions.

CONCLUSIONS

Our results underscore DeepVID v2 as a promising tool for enhancing voltage imaging. This framework has the potential to generalize to other low-photon imaging modalities and greatly facilitate the study of neuronal activities in the brain.

