College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, Liaoning, 110169, China.
Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang, China.
Biomed Eng Online. 2021 Nov 18;20(1):112. doi: 10.1186/s12938-021-00950-z.
The rapid development of artificial intelligence has improved automatic breast cancer diagnosis beyond what traditional machine learning methods achieve. Convolutional neural networks (CNNs) can automatically learn discriminative features, which helps raise the level of computer-aided diagnosis (CAD). They can improve the performance of distinguishing benign from malignant breast ultrasound (BUS) tumor images, making rapid breast tumor screening possible.
The classification model was evaluated on an independent set of 100 BUS tumor images (50 benign and 50 malignant cases) that was not used in network training. Evaluation indicators include accuracy, sensitivity, specificity, and the area under the ROC curve (AUC). On this test set, Fus2Net achieved an accuracy of 92%, a sensitivity of 95.65%, a specificity of 88.89%, and an AUC of 0.97 for classifying BUS tumor images.
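The evaluation indicators above can all be derived from a binary confusion matrix. A minimal sketch of how they are computed (the labels and predictions below are toy values for illustration, not the paper's results):

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, sensitivity, and specificity for a benign (0) /
    malignant (1) classification, computed from label lists."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return {
        "accuracy": (tp + tn) / len(y_true),
        "sensitivity": tp / (tp + fn),  # true positive rate (malignant recall)
        "specificity": tn / (tn + fp),  # true negative rate (benign recall)
    }

# Toy example (hypothetical predictions, not the paper's data):
m = binary_metrics([1, 1, 1, 1, 0, 0, 0, 0],
                   [1, 1, 1, 0, 0, 0, 0, 1])
# Each metric is 6/8 = 3/4 = 0.75 here.
```

The AUC, by contrast, is computed from the model's continuous scores rather than hard labels, so it is not reproduced in this sketch.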
The experiments compared Fus2Net with existing CNN classification architectures, and our customized Fus2Net architecture showed better overall performance. These results demonstrate that the proposed Fus2Net classification method can better assist radiologists in diagnosing benign and malignant BUS tumor images.
Existing public datasets are small, and the data suffer from class imbalance. In this paper, we provide a relatively larger dataset with a total of 1052 ultrasound images, including 696 benign and 356 malignant images, collected from a local hospital. We propose a novel CNN named Fus2Net for the benign/malignant classification of BUS tumor images; it contains two self-designed feature extraction modules. To evaluate how the classifier generalizes on the experimental dataset, we used the training set (646 benign and 306 malignant cases) for tenfold cross-validation. Meanwhile, to address the class imbalance of the dataset, the training data were augmented before being fed into Fus2Net. In the experiments, we used hyperparameter fine-tuning and regularization techniques to make Fus2Net converge.
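With an imbalanced training set (646 benign vs. 306 malignant cases), tenfold cross-validation is commonly run with stratified folds so each fold preserves the class ratio. A minimal stratified-split sketch under that assumption (the paper does not state its exact splitting procedure):

```python
import random

def stratified_kfold(labels, k=10, seed=0):
    """Split sample indices into k folds while preserving the
    class ratio in every fold (round-robin within each class)."""
    rng = random.Random(seed)
    by_class = {}
    for idx, y in enumerate(labels):
        by_class.setdefault(y, []).append(idx)
    folds = [[] for _ in range(k)]
    for idxs in by_class.values():
        rng.shuffle(idxs)
        for i, idx in enumerate(idxs):
            folds[i % k].append(idx)
    return folds

# Example with the paper's training-set sizes: 0 = benign, 1 = malignant.
labels = [0] * 646 + [1] * 306
folds = stratified_kfold(labels, k=10)
# Each fold holds ~65 benign and ~31 malignant indices; in each
# cross-validation round, one fold is held out and the other nine train.
```

In a full pipeline, the data augmentation described above would be applied only to the nine training folds of each round, never to the held-out fold, to avoid leaking augmented copies of test images into training.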