

Real-time and universal network for volumetric imaging from microscale to macroscale at high resolution.

Author Information

Lin Bingzhi, Xing Feng, Su Liwei, Wang Kekuan, Liu Yulan, Zhang Diming, Yang Xusan, Tan Huijun, Zhu Zhijing, Wang Depeng

Affiliations

College of Energy and Power Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China.

Key Laboratory of Soybean Molecular Design Breeding, National Key Laboratory of Black Soils Conservation and Utilization, Northeast Institute of Geography and Agroecology, Chinese Academy of Sciences, Changchun, China.

Publication Information

Light Sci Appl. 2025 Apr 29;14(1):178. doi: 10.1038/s41377-025-01842-w.

Abstract

Light-field imaging has wide applications in various domains, including microscale life science imaging, mesoscale neuroimaging, and macroscale fluid dynamics imaging. The development of deep learning-based reconstruction methods has greatly facilitated high-resolution light-field image processing; however, current deep learning-based light-field reconstruction methods have predominantly concentrated on the microscale. Considering the multiscale imaging capacity of the light-field technique, a network that can handle light-field image reconstruction across different scales would significantly benefit the development of volumetric imaging. Unfortunately, to our knowledge, no universal high-resolution light-field image reconstruction algorithm compatible with the microscale, mesoscale, and macroscale has been reported. To fill this gap, we present a real-time and universal network (RTU-Net) to reconstruct high-resolution light-field images at any scale. RTU-Net, the first network designed for multiscale light-field image reconstruction, employs an adaptive loss function based on generative adversarial theory and consequently exhibits strong generalization capability. We comprehensively assessed the performance of RTU-Net by reconstructing multiscale light-field images, including microscale tubulin and mitochondria datasets, a mesoscale synthetic mouse neural dataset, and a macroscale light-field particle image velocimetry dataset. The results indicate that RTU-Net achieves real-time, high-resolution light-field image reconstruction for volume sizes ranging from 300 μm × 300 μm × 12 μm to 25 mm × 25 mm × 25 mm, and it delivers higher resolution than recently reported light-field reconstruction networks. The high resolution, strong robustness, high efficiency, and especially the general applicability of RTU-Net will significantly deepen our insight into high-resolution volumetric imaging.
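The abstract states that RTU-Net uses an adaptive loss function based on generative adversarial theory but gives no implementation detail. The PyTorch sketch below is a generic, hypothetical illustration of how a pixel-wise reconstruction term and an adversarial term are commonly combined in such training objectives; the class name CompositeLoss, the weight lambda_adv, and the choice of L1 plus binary cross-entropy are assumptions for illustration only and are not taken from the paper.

# Minimal sketch, assuming a GAN-style reconstruction objective (not the authors' actual formulation).
import torch
import torch.nn as nn

class CompositeLoss(nn.Module):
    """Combine a pixel-wise reconstruction loss with an adversarial loss (illustrative only)."""
    def __init__(self, lambda_adv: float = 0.01):
        super().__init__()
        self.recon = nn.L1Loss()            # pixel-wise fidelity term
        self.adv = nn.BCEWithLogitsLoss()   # adversarial term on discriminator logits
        self.lambda_adv = lambda_adv        # hypothetical weighting between the two terms

    def forward(self, predicted_volume, reference_volume, disc_logits_on_prediction):
        # Reconstruction: push the predicted volume toward the reference volume.
        loss_recon = self.recon(predicted_volume, reference_volume)
        # Adversarial: encourage the generator to produce volumes the discriminator labels as real.
        target_real = torch.ones_like(disc_logits_on_prediction)
        loss_adv = self.adv(disc_logits_on_prediction, target_real)
        return loss_recon + self.lambda_adv * loss_adv

# Usage sketch (generator and discriminator are hypothetical placeholders):
# criterion = CompositeLoss(lambda_adv=0.01)
# pred = generator(light_field_image)
# loss = criterion(pred, ground_truth_volume, discriminator(pred))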


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a1c3/12041240/96af157e4cae/41377_2025_1842_Fig1_HTML.jpg
