
Image-to-image translation of label-free molecular vibrational images for a histopathological review using the UNet+/seg-cGAN model.

Author Information

He Yunjie, Li Jiasong, Shen Steven, Liu Kai, Wong Kelvin K, He Tiancheng, Wong Stephen T C

Affiliations

Translational Biophotonics Laboratory, Systems Medicine and Bioengineering Department, Houston Methodist Cancer Center, Houston, USA.

Pathology and Genome Medicine Department, Houston Methodist Hospital, Weill Cornell Medicine, Houston, USA.

Publication Information

Biomed Opt Express. 2022 Mar 8;13(4):1924-1938. doi: 10.1364/BOE.445319. eCollection 2022 Apr 1.

Abstract

Translating images generated by label-free microscopy imaging, such as Coherent Anti-Stokes Raman Scattering (CARS), into more familiar clinical presentations of histopathological images will help the adoption of real-time, spectrally resolved label-free imaging in clinical diagnosis. Generative adversarial networks (GAN) have made great progress in image generation and translation, but have been criticized for lacking precision. In particular, GAN has often misinterpreted image information and identified incorrect content categories during image translation of microscopy scans. To alleviate this problem, we developed a new Pix2pix GAN model that simultaneously learns classifying contents in the images from a segmentation dataset during the image translation training. Our model integrates UNet+ with seg-cGAN, conditional generative adversarial networks with partial regularization of segmentation. Technical innovations of the UNet+/seg-cGAN model include: (1) replacing UNet with UNet+ as the Pix2pix cGAN's generator to enhance pattern extraction and richness of the gradient, and (2) applying the partial regularization strategy to train a part of the generator network as the segmentation sub-model on a separate segmentation dataset, thus enabling the model to identify correct content categories during image translation. The quality of histopathological-like images generated based on label-free CARS images has been improved significantly.
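The two technical innovations above — a shared generator trunk feeding both the image-translation output and a segmentation sub-model trained on a separate dataset — can be illustrated with a minimal sketch. This is not the authors' implementation: the layer sizes, loss weights (`lam`, `mu`), and the simplified single-scale encoder standing in for the UNet+ backbone are all illustrative assumptions, and the adversarial term of the Pix2pix objective is omitted for brevity.

```python
# Hedged sketch (not the authors' code): a Pix2pix-style generator with a
# shared encoder, a translation decoder, and a segmentation head. Part of
# the generator doubles as a segmentation sub-model, mirroring the paper's
# "partial regularization" idea at a toy scale.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SegCGANGenerator(nn.Module):
    def __init__(self, in_ch=1, out_ch=3, n_classes=2, feat=16):
        super().__init__()
        # Shared encoder trunk (stands in for the UNet+ encoder)
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(),
        )
        # Translation head: label-free CARS patch -> H&E-like image
        self.translate = nn.Conv2d(feat, out_ch, 1)
        # Segmentation head: shares the trunk, trained on a separate
        # segmentation dataset so the trunk learns content categories
        self.segment = nn.Conv2d(feat, n_classes, 1)

    def forward(self, x):
        h = self.encoder(x)
        return torch.tanh(self.translate(h)), self.segment(h)

gen = SegCGANGenerator()
cars = torch.randn(2, 1, 64, 64)            # batch of CARS patches
fake_he, seg_logits = gen(cars)

# Combined generator objective (weights are assumptions; the adversarial
# cGAN term is omitted): L1 keeps the translation faithful, while the
# cross-entropy on the shared trunk regularizes content categories.
target_he = torch.randn_like(fake_he)       # placeholder paired H&E target
target_seg = torch.randint(0, 2, (2, 64, 64))
lam, mu = 100.0, 1.0
loss = (lam * F.l1_loss(fake_he, target_he)
        + mu * F.cross_entropy(seg_logits, target_seg))
```

During training, the segmentation head would be updated on the separate segmentation dataset while the translation head trains on paired CARS/H&E data, so the shared trunk is pulled toward features that identify correct content categories.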


