School of Information Science and Engineering, Shandong Normal University, Jinan, China.
Key Lab of Intelligent Computing & Information Security in Universities of Shandong, Shandong Provincial Key Laboratory for Novel Distributed Computer Software Technology, Institute of Biomedical Sciences, Shandong Normal University, Jinan, China.
Med Phys. 2022 Feb;49(2):966-977. doi: 10.1002/mp.15390. Epub 2021 Dec 15.
Contrast-enhanced spectral mammography (CESM) is an effective tool for diagnosing breast cancer, with the benefit of providing multiple image types. However, few deep learning-based breast cancer classification methods exploit this multiplicity. To combine the multiple features of CESM and thus aid physicians in making accurate diagnoses, we propose a hybrid approach that takes advantage of both fusion and classification models.
We evaluated the proposed method on a CESM dataset of 760 images obtained from 95 patients aged 21 to 74 years. The framework consists of two main parts: a generative adversarial network (GAN)-based image fusion module and a Res2Net-based classification module. The fusion module generates a fused image that combines the characteristics of the dual-energy subtracted (DES) and low-energy (LE) images, and the classification module classifies the fused image as benign or malignant.
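The two-stage data flow described above can be sketched as follows. This is a hypothetical illustration only: `fuse` stands in for the paper's GAN generator and `classify` for its Res2Net classifier; the pixel averaging and the intensity threshold are placeholder logic, not the authors' method.

```python
# Hypothetical sketch of the two-stage CESM pipeline (fusion, then
# classification). Both functions are placeholders for illustration.

def fuse(des_image, le_image):
    # Placeholder for the GAN-based fusion generator: here we simply
    # average the DES and LE pixel intensities to show the data flow.
    return [(d + l) / 2 for d, l in zip(des_image, le_image)]

def classify(fused_image, threshold=0.5):
    # Placeholder for the Res2Net classifier: a trivial mean-intensity
    # threshold stands in for the benign/malignant decision.
    mean_intensity = sum(fused_image) / len(fused_image)
    return "malignant" if mean_intensity > threshold else "benign"

des = [0.9, 0.8, 0.7]  # toy DES image (normalized intensities)
le = [0.2, 0.3, 0.4]   # toy LE image
label = classify(fuse(des, le))
```

In the actual framework, the fusion output is a full-resolution image that preserves complementary information from both modalities before being passed to the classifier.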
Based on the experimental results, the fused images contained complementary information from both image types (DES and LE), and the classification model achieved accurate results. In terms of quantitative indicators, the entropy of the fused images was 2.63, and on the test dataset the classification model achieved an accuracy of 94.784%, precision of 95.016%, recall of 95.912%, specificity of 0.945, F1 score of 0.955, and area under the curve (AUC) of 0.947.
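The quantitative indicators above can be computed as shown in this minimal sketch: Shannon entropy from an image's intensity histogram, and accuracy, precision, recall, specificity, and F1 from a confusion matrix. The function names and example values are illustrative, not taken from the paper.

```python
import math
from collections import Counter

def image_entropy(pixels):
    """Shannon entropy (bits) of an image's intensity histogram."""
    counts = Counter(pixels)
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def classification_metrics(tp, fp, tn, fn):
    """Accuracy, precision, recall, specificity, and F1 from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)          # sensitivity
    specificity = tn / (tn + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, specificity, f1

# Toy example: a 4-pixel image with two equally frequent gray levels
# has entropy exactly 1 bit.
print(image_entropy([0, 0, 1, 1]))  # -> 1.0
```

AUC is computed from the ranking of predicted scores rather than from a single confusion matrix, so it is omitted from this sketch.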
We conducted extensive comparative experiments and analyses on our in-house dataset, demonstrating that our method produces promising results in the fusion of CESM images and is more accurate than state-of-the-art methods in the classification of fused CESM images.