Liu Xin, Li Boyi, Liu Chengcheng, Ta Dean
Academy for Engineering and Technology, Fudan University, Shanghai, 200433 China.
State Key Laboratory of Medical Neurobiology, Fudan University, Shanghai, 200433 China.
Phenomics. 2023 Mar 2;3(4):408-420. doi: 10.1007/s43657-023-00094-1. eCollection 2023 Aug.
Fluorescence labeling and imaging make it possible to observe the structure of biological tissues and play a crucial role in histopathology. However, labeling and imaging biological tissues still face several challenges, such as time-consuming tissue preparation steps, expensive reagents, and signal bias caused by photobleaching. To overcome these limitations, we present a deep-learning-based method for fluorescence translation of tissue sections, built on a conditional generative adversarial network (cGAN). Experimental results on mouse kidney tissue demonstrate that the proposed method can predict other types of fluorescence images from a single raw fluorescence image and can realize virtual multi-label fluorescent staining by merging the generated fluorescence images. Moreover, the proposed method effectively reduces the time-consuming and laborious preparation steps in the imaging process, saving both cost and time.
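As a minimal illustrative sketch (not the authors' code), the virtual multi-label staining step can be understood as assigning each predicted single-channel fluorescence image a display color and summing the colorized channels into one composite. The function name, channel names, and color assignments below are hypothetical; the only assumption is that each generated image is a 2-D array with intensities in [0, 1].

```python
import numpy as np

def merge_fluorescence_channels(channels, colors):
    """Merge single-channel fluorescence images into one RGB composite.

    channels: list of 2-D float arrays with intensities in [0, 1],
              one per (predicted) fluorescent label
    colors:   list of RGB tuples assigning a display color to each channel
    """
    h, w = channels[0].shape
    composite = np.zeros((h, w, 3), dtype=np.float64)
    for img, color in zip(channels, colors):
        # Colorize each channel by scaling its intensity with the RGB color,
        # then accumulate into the composite image.
        composite += img[..., None] * np.asarray(color, dtype=np.float64)
    # Overlapping labels can push values above 1, so clip to a valid range.
    return np.clip(composite, 0.0, 1.0)

# Hypothetical example: two generated fluorescence channels,
# displayed in green and red respectively.
rng = np.random.default_rng(0)
nuclei = rng.random((64, 64))    # stand-in for one predicted channel
membrane = rng.random((64, 64))  # stand-in for another predicted channel
rgb = merge_fluorescence_channels(
    [nuclei, membrane],
    [(0.0, 1.0, 0.0), (1.0, 0.0, 0.0)],
)
```

In practice each input channel would come from the cGAN's output rather than random data; the merge itself is a simple per-pixel weighted sum.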
The online version contains supplementary material available at 10.1007/s43657-023-00094-1.