Nguyen Thanh, Xue Yujia, Li Yunzhe, Tian Lei, Nehmetallah George
Opt Express. 2018 Oct 1;26(20):26470-26484. doi: 10.1364/OE.26.026470.
Convolutional neural networks (CNNs) have gained tremendous success in solving complex inverse problems. The aim of this work is to develop a novel CNN framework to reconstruct video sequences of dynamic live cells captured using a computational microscopy technique, Fourier ptychographic microscopy (FPM). The unique feature of FPM is its ability to reconstruct images with both a wide field-of-view (FOV) and high resolution, i.e., a large space-bandwidth product (SBP), from a series of low-resolution intensity images. For live-cell imaging, a single FPM frame contains thousands of cell samples with different morphological features. Our idea is to fully exploit the statistical information provided by these large spatial ensembles to make predictions in a sequential measurement, without using any additional temporal dataset. Specifically, we show that it is possible to reconstruct high-SBP dynamic cell videos with a CNN trained only on the first FPM dataset captured at the beginning of a time-series experiment. Our CNN approach reconstructs a 12800×10800-pixel phase image in only ∼25 seconds, a 50× speedup over the model-based FPM algorithm. In addition, the CNN reduces the number of images required per time frame by ∼6×. Overall, this significantly improves the imaging throughput by reducing both the acquisition and computation times. The proposed CNN is based on the conditional generative adversarial network (cGAN) framework. We further propose a mixed loss function that combines the standard image-domain loss with a weighted Fourier-domain loss, which leads to improved reconstruction of high-frequency information. Additionally, we exploit transfer learning so that our pre-trained CNN can be further optimized to image other cell types.
Our technique demonstrates a promising deep learning approach to continuously monitor large live-cell populations over an extended time and gather useful spatial and temporal information with sub-cellular resolution.
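The mixed loss described above can be illustrated with a minimal sketch. This is not the authors' implementation: the choice of an L1 penalty for both terms, the comparison of complex spectra, and the `fourier_weight` parameter are all assumptions made for illustration.

```python
import numpy as np

def mixed_loss(pred, target, fourier_weight=0.1):
    """Combine an image-domain loss with a weighted Fourier-domain loss.

    The Fourier term compares the 2-D spectra of the predicted and
    target phase images, penalizing errors in high-frequency content
    that a pure image-domain loss tends to under-weight.
    """
    # Standard image-domain loss (mean absolute error).
    image_loss = np.mean(np.abs(pred - target))

    # Fourier-domain loss: difference between the complex spectra.
    pred_f = np.fft.fft2(pred)
    target_f = np.fft.fft2(target)
    fourier_loss = np.mean(np.abs(pred_f - target_f))

    return image_loss + fourier_weight * fourier_loss
```

In a training loop, this scalar would serve as the generator's reconstruction loss alongside the cGAN adversarial term; `fourier_weight` trades off fidelity of fine detail against overall image-domain accuracy.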