

Convolutional neural network for automated mass segmentation in mammography.

Author information

Department of Computer Science and Engineering, University of Connecticut, Storrs, CT 06269, USA.

The Informatics Research Institute (IRI), City of Scientific Research and Technological Applications (SRTA-City), Alexandria, Egypt.

Publication information

BMC Bioinformatics. 2020 Dec 9;21(Suppl 1):192. doi: 10.1186/s12859-020-3521-y.

Abstract

BACKGROUND

Automatic segmentation and localization of lesions in mammogram (MG) images are challenging even with advanced methods such as deep learning (DL). We developed a new model, based on the architecture of the semantic segmentation U-Net model, to precisely segment mass lesions in MG images. The proposed end-to-end convolutional neural network (CNN) based model extracts contextual information by combining low-level and high-level features. We trained the proposed model on large publicly available databases (CBIS-DDSM, BCDR-01, and INbreast) and a private database from the University of Connecticut Health Center (UCHC).
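The core U-Net idea referenced here — combining low-level (high-resolution) and high-level (semantically richer) features via skip connections — can be illustrated with a toy NumPy sketch. The array shapes and the nearest-neighbour upsampling are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

# Toy feature maps, laid out as (channels, height, width).
# low_level: high-resolution features from an early encoder layer.
# high_level: lower-resolution, semantically richer features from deeper layers.
low_level = np.random.rand(16, 64, 64)
high_level = np.random.rand(32, 32, 32)

def upsample2x(x):
    """Nearest-neighbour 2x upsampling along the two spatial axes."""
    return np.repeat(np.repeat(x, 2, axis=1), 2, axis=2)

# A U-Net skip connection: bring the deep features back up to the
# resolution of the shallow ones, then concatenate along the channel
# axis so the decoder sees both fine spatial detail and global context.
combined = np.concatenate([low_level, upsample2x(high_level)], axis=0)
print(combined.shape)  # (48, 64, 64)
```

In a real network the concatenated tensor would then pass through further convolutions in the decoder; the sketch only shows why the fused features carry both kinds of context.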

RESULTS

We compared the performance of the proposed model with those of state-of-the-art DL models, including the fully convolutional network (FCN), SegNet, Dilated-Net, the original U-Net, and Faster R-CNN, as well as the conventional region growing (RG) method. The proposed Vanilla U-Net model significantly outperforms the Faster R-CNN model in terms of runtime and the Intersection over Union (IOU) metric. Trained on both digitized film-based and fully digitized MG images, the proposed Vanilla U-Net model achieves a mean test accuracy of 92.6%. The proposed model achieves a mean Dice coefficient index (DI) of 0.951 and a mean IOU of 0.909, which show how close the output segments are to the corresponding lesions in the ground-truth maps. Data augmentation was very effective in our experiments, increasing the mean DI from 0.922 to 0.951 and the mean IOU from 0.856 to 0.909.
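The two overlap metrics reported here have standard definitions for binary masks: Dice = 2|A∩B| / (|A|+|B|) and IOU = |A∩B| / |A∪B|. A minimal sketch (the toy 4x4 masks are invented for illustration):

```python
import numpy as np

def dice_coefficient(pred, target):
    """Dice index: 2|A ∩ B| / (|A| + |B|) for binary masks."""
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum())

def iou(pred, target):
    """Intersection over Union: |A ∩ B| / |A ∪ B| for binary masks."""
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return intersection / union

# Toy masks: the predicted segment covers part of the ground truth.
pred = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]], dtype=bool)
target = np.array([[1, 1, 1, 0],
                   [1, 1, 1, 0],
                   [0, 0, 0, 0],
                   [0, 0, 0, 0]], dtype=bool)

print(dice_coefficient(pred, target))  # 0.8   (2*4 / (4 + 6))
print(iou(pred, target))               # 0.666... (4 / 6)
```

Note that Dice is always at least as large as IOU for the same pair of masks, which is consistent with the reported 0.951 DI versus 0.909 IOU.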

CONCLUSIONS

The proposed Vanilla U-Net based model can be used for precise segmentation of masses in MG images. This is because the segmentation process incorporates more multi-scale spatial context, capturing more local and global context to predict a precise pixel-wise segmentation map of an input full MG image. These predicted maps can help radiologists differentiate benign and malignant lesions based on lesion shape. We show that using transfer learning, introducing augmentation, and modifying the architecture of the original model yield better performance in terms of mean accuracy, mean DI, and mean IOU in detecting mass lesions, compared with the other DL models and the conventional method.


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0012/7724817/09262f8014f1/12859_2020_3521_Fig1_HTML.jpg
