
Super-resolution and segmentation deep learning for breast cancer histopathology image analysis.

Author Information

Juhong Aniwat, Li Bo, Yao Cheng-You, Yang Chia-Wei, Agnew Dalen W, Lei Yu Leo, Huang Xuefei, Piyawattanametha Wibool, Qiu Zhen

Affiliations

Department of Electrical and Computer Engineering, Michigan State University, East Lansing, MI 48823, USA.

Institute for Quantitative Health Science and Engineering, Michigan State University, East Lansing, MI 48823, USA.

Publication Information

Biomed Opt Express. 2022 Dec 5;14(1):18-36. doi: 10.1364/BOE.463839. eCollection 2023 Jan 1.

Abstract

Traditionally, a high-performance microscope with a large numerical aperture is required to acquire high-resolution images. However, such images are typically enormous, so they cannot be conveniently managed, transferred across a computer network, or stored in limited computer storage. As a result, image compression is commonly used to reduce image size at the cost of image resolution. Here, we demonstrate custom convolutional neural networks (CNNs) for both super-resolution enhancement of low-resolution images and characterization of cells and nuclei in hematoxylin and eosin (H&E) stained breast cancer histopathological images, using a combination of generator and discriminator networks, a super-resolution generative adversarial network based on aggregated residual transformations (SRGAN-ResNeXt), to facilitate cancer diagnosis in low-resource settings. The results show a strong enhancement in image quality: the peak signal-to-noise ratio and structural similarity of our network's outputs exceed 30 dB and 0.93, respectively. This performance is superior to the results obtained from both bicubic interpolation and the well-known SRGAN deep-learning method. In addition, another custom CNN performs image segmentation on the high-resolution breast cancer images generated by our model, achieving an average Intersection over Union of 0.869 and an average Dice similarity coefficient of 0.893 on the H&E image segmentation results. Finally, we propose jointly trained SRGAN-ResNeXt and Inception U-net models, which use the weights of the individually trained SRGAN-ResNeXt and Inception U-net models as pre-trained weights for transfer learning. The jointly trained model's results improve progressively and are promising.
We anticipate these custom CNNs can help overcome the inaccessibility of advanced microscopes or whole slide imaging (WSI) systems by recovering high-resolution images from low-performance microscopes in remote, resource-constrained settings.
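The abstract reports image quality in terms of peak signal-to-noise ratio (above 30 dB) and structural similarity (above 0.93). As a rough illustration of what these metrics measure — not the authors' implementation, and using a simplified global SSIM without the usual sliding window — a sketch in NumPy:

```python
import numpy as np

def psnr(reference, reconstructed, max_val=255.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE)."""
    mse = np.mean((reference.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

def global_ssim(x, y, max_val=255.0):
    """Simplified SSIM computed over the whole image (no sliding window)."""
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2
    x, y = x.astype(np.float64), y.astype(np.float64)
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (x.var() + y.var() + c2))

# Toy example: a reference image and a version offset by 10 gray levels.
ref = np.zeros((64, 64))
rec = ref + 10.0
print(round(psnr(ref, rec), 2))  # MSE = 100, so 10 * log10(65025 / 100) ≈ 28.13 dB
```

Published results typically use the windowed SSIM (e.g. scikit-image's `structural_similarity`), which weights local luminance, contrast, and structure rather than global statistics.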
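The segmentation results are likewise quantified with Intersection over Union (0.869) and the Dice similarity coefficient (0.893). A minimal sketch of these overlap metrics on binary masks — illustrative only, not the paper's evaluation code:

```python
import numpy as np

def iou(pred, target):
    """Intersection over Union: |A ∩ B| / |A ∪ B| for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    return np.logical_and(pred, target).sum() / np.logical_or(pred, target).sum()

def dice(pred, target):
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum())

# Toy masks: predicted and ground-truth foreground agree on one pixel.
pred = np.array([1, 1, 0, 0])
gt   = np.array([1, 0, 1, 0])
print(iou(pred, gt), dice(pred, gt))  # 1/3 and 0.5
```

Dice is always at least as large as IoU for the same masks, which is consistent with the reported pair of scores (0.893 vs. 0.869).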


