

Spatial and temporal super-resolution for fluorescence microscopy by a recurrent neural network.

Publication Information

Opt Express. 2021 May 10;29(10):15747-15763. doi: 10.1364/OE.423892.

Abstract

A novel spatial and temporal super-resolution (SR) framework based on a recurrent neural network (RNN) is demonstrated. In this work, we learn complex yet useful features from the temporal data by exploiting the structural characteristics of the RNN together with a skip connection. The supervision mechanism not only makes full use of the intermediate output of each recurrent layer to recover the final output, but also alleviates vanishing/exploding gradients during back-propagation. The proposed scheme achieves excellent reconstruction results, improving both the spatial and temporal resolution of fluorescence images on simulated and real tubulin datasets. In addition, robustness with respect to critical parameters such as the full-width at half-maximum (FWHM) and molecular density is also demonstrated. In the validation, performance improves by more than 20% for the intensity profile and by 8% for the FWHM, and the running time is reduced by at least 40% compared with the classic Deep-STORM method, a high-performance network widely used for comparison.
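
The abstract describes a recurrent architecture that carries a hidden state across a sequence of diffraction-limited frames, adds a skip connection, and supervises the intermediate output of every recurrent step so that gradients reach early steps directly. The sketch below is a minimal, hedged illustration of that idea and is not the authors' released code: the layer widths, class names, loss weighting, and training setup are assumptions made purely for clarity.

```python
# Illustrative sketch of an RNN-based SR scheme with a skip connection and
# deep supervision of intermediate outputs. All sizes/names are assumptions.
import torch
import torch.nn as nn


class RecurrentSRCell(nn.Module):
    """One recurrent step: fuse the current frame with the hidden state."""

    def __init__(self, channels: int = 32):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(1 + channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.to_image = nn.Conv2d(channels, 1, 3, padding=1)

    def forward(self, frame, hidden):
        hidden = self.fuse(torch.cat([frame, hidden], dim=1))
        # Skip connection: the cell predicts a residual on top of the input frame.
        out = self.to_image(hidden) + frame
        return out, hidden


class RecurrentSRNet(nn.Module):
    """Unrolls the cell over a frame sequence, keeping every intermediate output."""

    def __init__(self, channels: int = 32):
        super().__init__()
        self.cell = RecurrentSRCell(channels)
        self.channels = channels

    def forward(self, frames):  # frames: (batch, time, 1, H, W)
        b, t, _, h, w = frames.shape
        hidden = frames.new_zeros(b, self.channels, h, w)
        outputs = []
        for i in range(t):
            out, hidden = self.cell(frames[:, i], hidden)
            outputs.append(out)
        return outputs  # one prediction per recurrent step


def supervised_loss(outputs, target):
    """Deep supervision: every intermediate output is compared with the target,
    which also shortens gradient paths during back-propagation."""
    loss_fn = nn.MSELoss()
    return sum(loss_fn(o, target) for o in outputs) / len(outputs)


if __name__ == "__main__":
    net = RecurrentSRNet()
    frames = torch.rand(2, 4, 1, 64, 64)  # 4 low-resolution frames per sample
    target = torch.rand(2, 1, 64, 64)     # ground-truth super-resolved image
    loss = supervised_loss(net(frames), target)
    loss.backward()
    print(float(loss))
```

Averaging the per-step losses is one simple choice for combining the intermediate supervisions; a weighted sum emphasizing the final step would be an equally plausible alternative under the same idea.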

