Luzhe Huang, Hanlong Chen, Yilin Luo, Yair Rivenson, Aydogan Ozcan
Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA.
Bioengineering Department, University of California, Los Angeles, CA, 90095, USA.
Light Sci Appl. 2021 Mar 23;10(1):62. doi: 10.1038/s41377-021-00506-9.
Volumetric imaging of samples using fluorescence microscopy plays an important role in various fields, including the physical, medical, and life sciences. Here we report a deep learning-based volumetric image inference framework that uses 2D images sparsely captured by a standard wide-field fluorescence microscope at arbitrary axial positions within the sample volume. Through a recurrent convolutional neural network, which we term Recurrent-MZ, 2D fluorescence information from a few axial planes within the sample is explicitly incorporated to digitally reconstruct the sample volume over an extended depth-of-field. In experiments on C. elegans and nanobead samples, Recurrent-MZ is demonstrated to significantly extend the depth-of-field of a 63×/1.4NA objective lens while providing a 30-fold reduction in the number of axial scans required to image the same sample volume. We further illustrated the generalization of this recurrent network for 3D imaging by showing its resilience to varying imaging conditions, including different sequences of input images covering various axial permutations and unknown axial positioning errors. We also demonstrated wide-field-to-confocal cross-modality image transformations using the Recurrent-MZ framework, performing 3D image reconstruction of a sample from a few wide-field 2D fluorescence images as input, matching confocal microscopy images of the same sample volume. Recurrent-MZ demonstrates the first application of recurrent neural networks to microscopic image reconstruction and provides a flexible and rapid volumetric imaging framework, overcoming the limitations of current 3D scanning microscopy tools.
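The core idea described above, recurrently fusing a variable-length sequence of sparsely sampled 2D planes into a single latent representation of the volume, can be illustrated with a toy NumPy sketch. This is not the authors' Recurrent-MZ architecture; the 3×3 kernels, the tanh update rule, and all parameter values here are illustrative assumptions standing in for a trained recurrent convolutional network:

```python
import numpy as np

def conv2d(img, kernel):
    # 'Same'-size 2D convolution via zero padding (toy helper).
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * padded[i:i + img.shape[0], j:j + img.shape[1]]
    return out

rng = np.random.default_rng(0)

# Hypothetical (untrained) weights: one input kernel, one recurrent kernel.
W_in = rng.normal(scale=0.1, size=(3, 3))
W_h = rng.normal(scale=0.1, size=(3, 3))

def recurrent_fuse(planes):
    """Fold an arbitrary-length sequence of 2D input planes into one
    hidden state, mimicking the order-flexible recurrent fusion idea:
    each new axial plane refines the running estimate of the volume."""
    h = np.zeros_like(planes[0], dtype=float)
    for p in planes:
        h = np.tanh(conv2d(p, W_in) + conv2d(h, W_h))
    return h

# Three sparsely sampled 64x64 input planes (random stand-ins for images).
planes = [rng.random((64, 64)) for _ in range(3)]
h = recurrent_fuse(planes)
print(h.shape)  # the fused state keeps the lateral image dimensions
```

In a real system, `h` would then be decoded (e.g., by further convolutional layers conditioned on a target axial position) into output slices spanning the extended depth-of-field; the recurrence is what lets the network accept different numbers and orderings of input planes.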