He Yunjie, Li Jiasong, Shen Steven, Liu Kai, Wong Kelvin K, He Tiancheng, Wong Stephen T C
Translational Biophotonics Laboratory, Systems Medicine and Bioengineering Department, Houston Methodist Cancer Center, Houston, USA.
Pathology and Genome Medicine Department, Houston Methodist Hospital, Weill Cornell Medicine, Houston, USA.
Biomed Opt Express. 2022 Mar 8;13(4):1924-1938. doi: 10.1364/BOE.445319. eCollection 2022 Apr 1.
Translating images generated by label-free microscopy, such as coherent anti-Stokes Raman scattering (CARS), into the more familiar clinical presentation of histopathological images will help the adoption of real-time, spectrally resolved label-free imaging in clinical diagnosis. Generative adversarial networks (GANs) have made great progress in image generation and translation, but have been criticized for lacking precision. In particular, GANs often misinterpret image information and identify incorrect content categories when translating microscopy scans. To alleviate this problem, we developed a new Pix2pix GAN model that simultaneously learns to classify image content from a segmentation dataset during image-translation training. Our model integrates UNet+ with seg-cGAN, a conditional generative adversarial network with partial regularization of segmentation. The technical innovations of the UNet+/seg-cGAN model are: (1) replacing UNet with UNet+ as the Pix2pix cGAN's generator to enhance pattern extraction and gradient richness, and (2) applying a partial-regularization strategy that trains part of the generator network as a segmentation sub-model on a separate segmentation dataset, enabling the model to identify correct content categories during image translation. The quality of the histopathology-like images generated from label-free CARS images is thereby significantly improved.
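The abstract describes a generator objective that augments the standard Pix2pix terms with a segmentation loss applied to part of the network. The following is a minimal sketch of that composite objective, not the authors' code: the function names, the cross-entropy formulation, and the weighting constants `lambda_l1` and `lambda_seg` are illustrative assumptions.

```python
import numpy as np

def l1_loss(fake, real):
    """Pixel-wise L1 reconstruction term, as in standard Pix2pix."""
    return float(np.mean(np.abs(fake - real)))

def adversarial_loss(d_fake):
    """Non-saturating generator term: -log D(G(x)), averaged over outputs."""
    eps = 1e-8
    return float(-np.mean(np.log(d_fake + eps)))

def seg_cross_entropy(probs, labels, n_classes):
    """Cross-entropy on the segmentation sub-model's class probabilities.

    probs:  (N, n_classes) softmax outputs of the partially regularized
            segmentation branch; labels: (N,) integer class indices.
    """
    eps = 1e-8
    one_hot = np.eye(n_classes)[labels]  # one-hot encode the labels
    return float(-np.mean(np.sum(one_hot * np.log(probs + eps), axis=-1)))

def generator_loss(d_fake, fake, real, seg_probs, seg_labels,
                   n_classes=3, lambda_l1=100.0, lambda_seg=10.0):
    """Composite objective: cGAN + lambda_l1 * L1 + lambda_seg * segmentation.

    The segmentation term only sees seg_probs/seg_labels from the separate
    segmentation dataset, so in training it would back-propagate through
    just the shared sub-network (the "partial regularization" idea).
    """
    return (adversarial_loss(d_fake)
            + lambda_l1 * l1_loss(fake, real)
            + lambda_seg * seg_cross_entropy(seg_probs, seg_labels, n_classes))
```

In an actual training loop, the translation terms and the segmentation term would be computed on different mini-batches (paired CARS/histopathology images vs. the segmentation dataset), with gradients from the segmentation loss restricted to the shared part of the UNet+ generator.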