Department of Computer Science and Engineering, Ulsan National Institute of Science and Technology (UNIST), South Korea.
Department of Health Sciences and Technology, Samsung Advanced Institute for Health Science and Technology, Sungkyunkwan University, South Korea; Institute for Refractory Cancer Research, Samsung Medical Center, South Korea.
Med Image Anal. 2021 May;70:101995. doi: 10.1016/j.media.2021.101995. Epub 2021 Feb 12.
In this paper, we propose a novel microscopy image translation method for transforming a bright-field microscopy image into three different fluorescence images that visualize apoptosis (dead cells), cell nuclei, and cell cytoplasm, respectively. These biomarkers are commonly used in high-content drug screening to analyze drug response. The main contribution of the proposed work is the automatic generation of three fluorescence images from a conventional bright-field image; this can greatly reduce the time-consuming and laborious tissue preparation process and improve the throughput of the screening process. Our proposed method uses only a single bright-field image and the corresponding fluorescence images as a set of image pairs for training an end-to-end deep convolutional neural network. By leveraging deep convolutional neural networks trained on such pairs of bright-field and corresponding fluorescence images, our proposed method can produce synthetic fluorescence images comparable to real fluorescence microscopy images with high accuracy. Our proposed model uses multi-task learning with adversarial losses to generate more accurate and realistic microscopy images. We assess the efficacy of the proposed method using real bright-field and fluorescence microscopy image datasets from patient-derived glioblastoma samples, and validate the method's accuracy with various quality metrics including cell number correlation (CNC), peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), cell viability correlation (CVC), error maps, and R correlation.
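As a rough illustration of two of the image-quality metrics named above (not the authors' evaluation code), PSNR and a simplified, single-window SSIM between a real and a synthetic fluorescence image can be sketched as follows; the standard SSIM formulation averages the same statistic over local sliding windows, and the constants follow the commonly used defaults (k1 = 0.01, k2 = 0.03):

```python
import numpy as np

def psnr(ref, gen, max_val=255.0):
    """Peak signal-to-noise ratio between a reference and a generated image."""
    mse = np.mean((ref.astype(np.float64) - gen.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

def ssim_global(ref, gen, max_val=255.0):
    """Simplified SSIM computed over the whole image as one window.

    The reference formulation averages this statistic over local
    (e.g. 11x11 Gaussian-weighted) windows; this global variant only
    illustrates the luminance/contrast/structure terms.
    """
    x = ref.astype(np.float64)
    y = gen.astype(np.float64)
    c1 = (0.01 * max_val) ** 2  # stabilizing constants from the SSIM paper
    c2 = (0.03 * max_val) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

For identical images PSNR is unbounded (infinite) and SSIM is exactly 1; any discrepancy between the synthetic and real fluorescence image lowers both scores.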