Li Xinyang, Zhang Guoxun, Qiao Hui, Bao Feng, Deng Yue, Wu Jiamin, He Yangfan, Yun Jingping, Lin Xing, Xie Hao, Wang Haoqian, Dai Qionghai
Department of Automation, Tsinghua University, Beijing, 100084, China.
Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, 518055, China.
Light Sci Appl. 2021 Mar 1;10(1):44. doi: 10.1038/s41377-021-00484-y.
The development of deep learning, together with open access to substantial collections of imaging data, offers a potential solution for computational image transformation and is gradually changing the landscape of optical imaging and biomedical research. However, current implementations of deep learning usually operate in a supervised manner, and their reliance on laborious and error-prone data annotation procedures remains a barrier to broader applicability. Here, we propose an unsupervised image transformation framework to facilitate the use of deep learning in optical microscopy, even in cases where supervised models cannot be applied. Through the introduction of a saliency constraint, the unsupervised model, named Unsupervised content-preserving Transformation for Optical Microscopy (UTOM), can learn the mapping between two image domains without paired training data while avoiding distortions of the image content. UTOM shows promising performance in a wide range of biomedical image transformation tasks, including in silico histological staining, fluorescence image restoration, and virtual fluorescence labeling. Quantitative evaluations reveal that UTOM achieves stable and high-fidelity image transformations across different imaging conditions and modalities. We anticipate that our framework will encourage a paradigm shift in training neural networks and enable more applications of artificial intelligence in biomedical imaging.
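The key idea described in the abstract is an unpaired domain mapping constrained so that salient structures in the translated image agree with those in the input, which suppresses content hallucination. The sketch below illustrates one plausible form of such a saliency constraint on top of a CycleGAN-style objective; it is a minimal example in PyTorch, and the soft-threshold mask, the function names (saliency_mask, saliency_constraint_loss, generator_loss), and the loss weight lambda_sal are illustrative assumptions rather than the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def saliency_mask(img, threshold=0.5, sharpness=50.0):
    """Soft-binarize an image into a saliency mask.

    A steep sigmoid around the threshold approximates binarization while
    remaining differentiable, so the constraint can back-propagate
    through the generator. (Assumed formulation.)
    """
    return torch.sigmoid((img - threshold) * sharpness)

def saliency_constraint_loss(source, translated, thr_src=0.5, thr_tgt=0.5):
    """Penalize disagreement between the saliency masks of a source image
    and its translation, discouraging the generator from adding or
    removing structures during unpaired domain transfer."""
    mask_src = saliency_mask(source, thr_src)
    mask_tgt = saliency_mask(translated, thr_tgt)
    return F.l1_loss(mask_tgt, mask_src)

def generator_loss(G_AB, G_BA, real_A, real_B, adv_loss, cycle_loss,
                   lambda_cyc=10.0, lambda_sal=10.0):
    """Hypothetical generator objective: adversarial + cycle-consistency
    terms of a CycleGAN-style model, plus the saliency constraint applied
    to both translation directions."""
    fake_B = G_AB(real_A)
    fake_A = G_BA(real_B)
    loss = (adv_loss(fake_B) + adv_loss(fake_A)
            + lambda_cyc * (cycle_loss(G_BA(fake_B), real_A)
                            + cycle_loss(G_AB(fake_A), real_B))
            + lambda_sal * (saliency_constraint_loss(real_A, fake_B)
                            + saliency_constraint_loss(real_B, fake_A)))
    return loss
```

In this sketch, G_AB and G_BA are the two generators of an unpaired translation model, and adv_loss and cycle_loss stand in for the usual adversarial and cycle-consistency terms; only the saliency term is specific to the content-preservation idea highlighted in the abstract.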