IEEE Trans Med Imaging. 2020 Jun;39(6):2256-2266. doi: 10.1109/TMI.2020.2968504. Epub 2020 Jan 21.
Visualizing the details of different cellular structures is of great importance for elucidating cellular functions. However, it is challenging to obtain high-quality images of different structures directly due to complex cellular environments. Fluorescence staining is a popular technique for labeling different structures but has several drawbacks. In particular, label staining is time-consuming and may affect cell morphology, and the number of simultaneous labels is inherently limited. This raises the need to build computational models that learn relationships between unlabeled microscopy images and labeled fluorescence images, and then infer fluorescence labels for other microscopy images without the physical staining process. We propose a novel deep model for virtual staining of unlabeled microscopy images. We first propose a novel network layer, known as the global pixel transformer layer, that fuses global information from inputs effectively. The proposed global pixel transformer layer can generate outputs with arbitrary dimensions, and can be employed for all the regular, down-sampling, and up-sampling operators. We then combine our proposed global pixel transformer layers with dense blocks to build a U-Net-like network. We believe such a design can promote feature reuse between layers. In addition, we propose a multi-scale input strategy to encourage networks to capture features at different scales. We conduct evaluations across various fluorescence image prediction tasks to demonstrate the effectiveness of our approach. Both quantitative and qualitative results show that our method outperforms the state-of-the-art approach significantly. It is also shown that our proposed global pixel transformer layer helps improve the fluorescence image prediction results.
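The abstract does not give the layer's exact formulation, but the stated properties (global information fusion and arbitrary output dimensions, covering regular, down-sampling, and up-sampling operators) can be illustrated with an attention-style sketch: a learnable query grid of the desired output size attends over every input pixel, so each output location aggregates global context. All names, shapes, and the query-grid mechanism below are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def global_pixel_transform(x, w_q, w_k, w_v, out_h, out_w):
    """Hypothetical global-fusion layer (not the paper's exact design).

    x:   input feature map of shape (H, W, C)
    w_q: learnable query grid, shape (out_h * out_w, d) -- one query
         per output pixel, so the output size is arbitrary
    w_k, w_v: key/value projections, each of shape (C, d)
    Returns a feature map of shape (out_h, out_w, d).
    """
    H, W, C = x.shape
    assert w_q.shape[0] == out_h * out_w
    tokens = x.reshape(H * W, C)      # flatten all pixels into tokens
    k = tokens @ w_k                  # keys from every input pixel
    v = tokens @ w_v                  # values from every input pixel
    # Each output pixel attends over ALL input pixels: global fusion.
    attn = softmax(w_q @ k.T / np.sqrt(k.shape[1]), axis=-1)
    out = attn @ v                    # (out_h * out_w, d)
    return out.reshape(out_h, out_w, -1)

# With out_h/out_w smaller, equal to, or larger than H/W, the same layer
# acts as a down-sampling, regular, or up-sampling operator respectively.
rng = np.random.default_rng(0)
x = rng.normal(size=(8, 8, 3))
d = 4
w_k = rng.normal(size=(3, d))
w_v = rng.normal(size=(3, d))
w_q = rng.normal(size=(4 * 4, d))
y = global_pixel_transform(x, w_q, w_k, w_v, 4, 4)  # down-sampling use
```

Because the output resolution is set only by the number of query rows, the same mechanism covers all three operator roles mentioned in the abstract.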