Ferreira Margarida R, Torres Helena R, Oliveira Bruno, de Araujo Augusto R V F, Morais Pedro, Novais Paulo, Vilaca Joao L
Annu Int Conf IEEE Eng Med Biol Soc. 2023 Jul;2023:1-4. doi: 10.1109/EMBC40787.2023.10340293.
Accurate classification of lesions as benign or malignant in breast ultrasound (BUS) images is a critical task that requires experienced radiologists and faces many challenges, such as poor image quality, artifacts, and high lesion variability. Automatic lesion classification may therefore aid professionals in breast cancer diagnosis. In this scope, computer-aided diagnosis systems have been proposed to assist in medical image interpretation, mitigating intra- and inter-observer variability. Recently, such systems based on convolutional neural networks have demonstrated impressive results in medical image classification tasks. However, the lack of public benchmarks and of a standardized evaluation method hampers performance comparison across networks. This work presents a benchmark for lesion classification in BUS images comparing six state-of-the-art networks: GoogLeNet, InceptionV3, ResNet, DenseNet, MobileNetV2, and EfficientNet. For each network, five input data variations incorporating segmentation information were tested to compare their impact on the final performance. The methods were trained on a multi-center BUS dataset (BUSI and UDIAT) and evaluated using the following metrics: precision, sensitivity, F1-score, accuracy, and area under the curve (AUC). Overall, using the lesion with a thin border of background as input provides the best performance. For this input data, EfficientNet obtained the best results: an accuracy of 97.65% and an AUC of 96.30%.
Clinical Relevance: This study showed the potential of deep neural networks for use in clinical practice for breast lesion classification, also suggesting the best model choices.
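The two reusable pieces of the evaluation pipeline described above can be sketched in plain NumPy: extracting the best-performing input variation (the lesion bounding box plus a thin border of background, derived from a segmentation mask) and computing the reported scalar metrics. This is a minimal illustrative sketch, not the authors' code; the function names, the margin value, and the NumPy-only implementation are assumptions.

```python
import numpy as np

def crop_lesion_with_margin(image, mask, margin=10):
    """Crop the lesion bounding box from `image`, expanded by a thin
    border of `margin` background pixels on each side (clipped to the
    image bounds). `mask` is a binary lesion segmentation.
    Note: the margin value here is illustrative, not from the paper."""
    ys, xs = np.nonzero(mask)
    y0, x0 = ys.min(), xs.min()
    y1, x1 = ys.max() + 1, xs.max() + 1
    y0, x0 = max(y0 - margin, 0), max(x0 - margin, 0)
    y1 = min(y1 + margin, image.shape[0])
    x1 = min(x1 + margin, image.shape[1])
    return image[y0:y1, x0:x1]

def classification_metrics(y_true, y_pred):
    """Precision, sensitivity (recall), F1-score, and accuracy for a
    binary benign(0)/malignant(1) classification task."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    precision = tp / (tp + fp) if tp + fp else 0.0
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    denom = precision + sensitivity
    f1 = 2 * precision * sensitivity / denom if denom else 0.0
    accuracy = float(np.mean(y_pred == y_true))
    return {"precision": precision, "sensitivity": sensitivity,
            "f1": f1, "accuracy": accuracy}
```

In practice the cropped region would be resized to each network's expected input size before training; the AUC additionally requires the continuous malignancy scores rather than the hard labels used here.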