
Multiview multimodal network for breast cancer diagnosis in contrast-enhanced spectral mammography images.

Author Information

Song Jingqi, Zheng Yuanjie, Zakir Ullah Muhammad, Wang Junxia, Jiang Yanyun, Xu Chenxi, Zou Zhenxing, Ding Guocheng

Affiliations

School of Information Science and Engineering, Shandong Normal University, Jinan, China.

Medical Imaging Department, Yantai Yuhuangding Hospital, Yantai, China.

Publication Information

Int J Comput Assist Radiol Surg. 2021 Jun;16(6):979-988. doi: 10.1007/s11548-021-02391-4. Epub 2021 May 8.

Abstract

PURPOSE

Contrast-enhanced spectral mammography (CESM) is an efficient tool for detecting breast cancer because of its image characteristics. However, among most deep learning-based methods for breast cancer classification, few models can integrate both its multiview and multimodal features. To effectively utilize the image features of CESM and thus help physicians improve diagnostic accuracy, we propose a multiview multimodal network (MVMM-Net).

METHODS

Experiments are carried out on an in-house CESM dataset of 760 images from 95 patients aged 21-74 years. The framework consists of three main stages: model input, image feature extraction, and image classification. The first stage preprocesses the CESM images to make effective use of their multiview and multimodal features. In the feature extraction stage, a deep learning-based network extracts CESM image features. The last stage integrates the different features for classification with the MVMM-Net model.
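The abstract does not include code; the following is a minimal PyTorch sketch of one way such a multiview multimodal fusion network could be organized, with one CNN branch per (view, modality) input and feature concatenation before a classifier head. The branch count, the ResNet-50 backbone, and fusion by concatenation are illustrative assumptions, not the paper's exact MVMM-Net architecture.

```python
# Minimal sketch of a multiview multimodal fusion network for CESM (PyTorch).
# Assumptions: 4 input branches (e.g. CC/MLO views x low-energy/recombined
# modalities), an ImageNet-pretrained ResNet-50 per branch, late fusion by
# concatenation. The paper's MVMM-Net may differ in all of these choices.
import torch
import torch.nn as nn
from torchvision import models


class MVMMNetSketch(nn.Module):
    """One CNN branch per (view, modality) pair; pooled features are
    concatenated and passed to a small benign-vs-malignant classifier."""

    def __init__(self, n_branches: int = 4, n_classes: int = 2):
        super().__init__()

        def make_backbone() -> nn.Module:
            m = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
            m.fc = nn.Identity()  # expose the 2048-d pooled feature vector
            return m

        self.branches = nn.ModuleList(make_backbone() for _ in range(n_branches))
        self.classifier = nn.Sequential(
            nn.Linear(2048 * n_branches, 512),
            nn.ReLU(inplace=True),
            nn.Dropout(0.5),
            nn.Linear(512, n_classes),
        )

    def forward(self, views):
        # `views`: list of tensors, one per (view, modality) input,
        # each shaped (batch, 3, H, W).
        feats = [branch(x) for branch, x in zip(self.branches, views)]
        return self.classifier(torch.cat(feats, dim=1))


# Usage with dummy inputs standing in for the four CESM images of one breast.
model = MVMMNetSketch()
dummy = [torch.randn(2, 3, 224, 224) for _ in range(4)]
logits = model(dummy)  # shape (2, 2)
```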

RESULTS

According to the experimental results, the proposed method based on the Res2Net50 framework achieves an accuracy of 96.591%, sensitivity of 96.396%, specificity of 96.350%, precision of 96.833%, F1_score of 0.966, and AUC of 0.966 on the test set. Comparative experiments illustrate that the classification performance of the model can be improved by using multiview multimodal features.
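As a hedged illustration of how the reported test-set metrics (accuracy, sensitivity, specificity, precision, F1 score, AUC) can be computed, the snippet below uses scikit-learn on placeholder labels and scores; the numbers are not data from the paper.

```python
# Computing the evaluation metrics used in the paper with scikit-learn.
# y_true and y_prob below are placeholders for illustration only.
import numpy as np
from sklearn.metrics import (accuracy_score, confusion_matrix, f1_score,
                             precision_score, recall_score, roc_auc_score)

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])                  # ground-truth labels
y_prob = np.array([0.9, 0.2, 0.8, 0.6, 0.3, 0.1, 0.7, 0.4])  # model scores
y_pred = (y_prob >= 0.5).astype(int)                          # thresholded predictions

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

print("accuracy   :", accuracy_score(y_true, y_pred))
print("sensitivity:", recall_score(y_true, y_pred))   # TP / (TP + FN)
print("specificity:", tn / (tn + fp))                 # TN / (TN + FP)
print("precision  :", precision_score(y_true, y_pred))
print("F1 score   :", f1_score(y_true, y_pred))
print("AUC        :", roc_auc_score(y_true, y_prob))
```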

CONCLUSION

We propose a deep learning classification model that combines multiple features of CESM. The experimental results indicate that our method is more accurate than state-of-the-art methods and produces reliable classification results for CESM images.

