
Explainable AI for CNN-based prostate tumor segmentation in multi-parametric MRI correlated to whole mount histopathology.

Affiliations

Department of Radiology, Medical Physics, Medical Center University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany.

German Cancer Consortium (DKTK), Partner Site Freiburg, Freiburg, Germany.

Publication information

Radiat Oncol. 2022 Apr 2;17(1):65. doi: 10.1186/s13014-022-02035-0.

Abstract

Automatic prostate tumor segmentation often fails to identify the lesion even when multi-parametric MRI data are used as input, and the segmentation output is difficult to verify due to the lack of clinically established ground truth images. In this work we use an explainable deep learning approach to interpret the predictions of a convolutional neural network (CNN) for prostate tumor segmentation. The CNN uses a U-Net architecture trained on multi-parametric MRI data from 122 patients to automatically segment the prostate gland and prostate tumor lesions. In addition, co-registered ground truth data from whole-mount histopathology images were available for 15 patients, which were used as a test set during CNN testing. To interpret the segmentation results of the CNN, heat maps were generated using the Gradient-weighted Class Activation Mapping (Grad-CAM) method. The CNN achieved a mean Dice-Sørensen coefficient of 0.62 for the prostate gland and 0.31 for the tumor lesions against the radiologist-drawn ground truth, and 0.32 against the whole-mount histology ground truth for the tumor lesions. The Dice-Sørensen coefficients between the CNN predictions and the manual segmentations from MRI and histology data were not significantly different. Within the prostate, the Grad-CAM heat maps could differentiate between tumor and healthy prostate tissue, which indicates that the image information in the tumor region was essential for the CNN segmentation.
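The Dice-Sørensen coefficient reported above measures overlap between a predicted and a reference binary mask. A minimal sketch of its computation (not the authors' code; mask shapes and values are illustrative):

```python
import numpy as np

def dice_coefficient(pred, truth, eps=1e-8):
    """Dice-Sørensen coefficient of two binary masks:
    DSC = 2 * |A ∩ B| / (|A| + |B|)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    # eps guards against division by zero when both masks are empty
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)

# Toy 4x4 example: 4 predicted voxels, 2 true voxels, 2 overlapping
pred = np.zeros((4, 4), dtype=bool)
truth = np.zeros((4, 4), dtype=bool)
pred[1:3, 1:3] = True   # 4 voxels
truth[1:3, 1:2] = True  # 2 voxels, both inside the prediction
print(round(dice_coefficient(pred, truth), 3))  # 2*2/(4+2) ≈ 0.667
```

In practice the same formula is applied slice-wise or volume-wise to the CNN's segmentation masks against each ground truth (radiologist contour or histology).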

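The Grad-CAM heat maps used in the abstract weight each convolutional feature map by the global-average-pooled gradient of the class score, then keep only the positive evidence. A framework-free sketch of that weighting scheme (synthetic activations and gradients; not the authors' implementation):

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM heat map from one conv layer.

    activations: (K, H, W) feature maps A^k
    gradients:   (K, H, W) d(score)/dA^k for the target class
    """
    # alpha_k: global-average-pooled gradient per feature map
    alphas = gradients.mean(axis=(1, 2))             # shape (K,)
    # Weighted sum over the K feature maps -> (H, W) map
    cam = np.tensordot(alphas, activations, axes=1)
    cam = np.maximum(cam, 0.0)                       # ReLU: keep positive evidence
    if cam.max() > 0:
        cam /= cam.max()                             # scale to [0, 1] for display
    return cam

# Synthetic example: 2 feature maps on an 8x8 grid
rng = np.random.default_rng(0)
acts = rng.random((2, 8, 8))
grads = rng.random((2, 8, 8))
heat = grad_cam(acts, grads)  # (8, 8) heat map in [0, 1]
```

In the paper's setting, high heat-map values inside the tumor region (and low values in healthy prostate tissue) are what indicate that the CNN's segmentation relied on the tumor's image information.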

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0ab1/8976981/14047c8f6b59/13014_2022_2035_Fig1_HTML.jpg
