Segmentation-based BI-RADS ensemble classification of breast tumours in ultrasound images.
Affiliations
2nd Department of Radiology, Medical University of Gdansk, 17 Smoluchowskiego Str., Gdansk 80-214, Poland.
Department of Thoracic Radiology, Karolinska University Hospital, Anna Steckséns g 41, Solna 17176, Sweden.
Publication information
Int J Med Inform. 2024 Sep;189:105522. doi: 10.1016/j.ijmedinf.2024.105522. Epub 2024 Jun 6.
BACKGROUND
The development of computer-aided diagnosis systems for breast cancer imaging is growing exponentially. Since 2016, 81 papers have described the automated segmentation of breast lesions in ultrasound images using artificial intelligence. However, only two papers have dealt with complex BI-RADS classifications.
PURPOSE
This study addresses the automatic classification of breast lesions into binary classes (benign vs. malignant) and multiple BI-RADS classes based on a single ultrasonographic image. Achieving this task should reduce the subjectivity of an individual operator's assessment.
MATERIALS AND METHODS
Automatic image segmentation methods (PraNet, CaraNet and FCBFormer) adapted to the specific segmentation task were investigated, using the U-Net model as a reference. A new classification method was developed using an ensemble of selected segmentation approaches. All experiments were performed on the publicly available BUS B, OASBUD and BUSI datasets and on a private dataset.
RESULTS
FCBFormer achieved the best outcomes for the segmentation task, with intersection over union (IoU) values of 0.81, 0.80 and 0.73 and Dice values of 0.89, 0.87 and 0.82, respectively, for the BUS B, BUSI and OASBUD datasets. Through a series of experiments, we determined that adding an extra 30-pixel margin to the segmentation mask counteracts the potential errors introduced by the segmentation algorithm. An ensemble of the full-image classifier, bounding-box classifier and masked-image classifier was the most accurate for binary classification, achieving the best accuracy (ACC; 0.908), F1 score (0.846) and area under the receiver operating characteristic curve (AUROC; 0.871) on the BUS B dataset and ACC (0.982), F1 (0.984) and AUROC (0.998) on the UCC BUS dataset, outperforming each classifier used separately. It was also the most effective for BI-RADS classification, with ACC of 0.953, F1 of 0.920 and AUROC of 0.986 on UCC BUS. Hard voting was the most effective method for dichotomous classification. For the multi-class BI-RADS classification, the soft voting approach was employed.
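The two voting schemes mentioned above can be sketched as follows. This is a minimal NumPy illustration of hard voting (majority over predicted labels, as used for the benign/malignant task) and soft voting (averaging class probabilities, as used for multi-class BI-RADS); the function names and array shapes are assumptions for illustration, not the authors' code:

```python
import numpy as np

def hard_vote(predictions):
    """Majority vote over per-classifier class labels.

    predictions: array-like of shape (n_classifiers, n_samples),
    integer class labels. Returns one label per sample.
    """
    predictions = np.asarray(predictions)
    # For each sample (column), count label occurrences and pick the mode.
    return np.apply_along_axis(lambda col: np.bincount(col).argmax(),
                               axis=0, arr=predictions)

def soft_vote(probabilities):
    """Average per-classifier class probabilities, then take the argmax.

    probabilities: array-like of shape (n_classifiers, n_samples, n_classes).
    Returns one label per sample.
    """
    probabilities = np.asarray(probabilities)
    return probabilities.mean(axis=0).argmax(axis=1)
```

In this sketch the three ensemble members would correspond to the full-image, bounding-box and masked-image classifiers; hard voting discards each member's confidence, while soft voting lets a confident member outweigh two uncertain ones, which can matter when BI-RADS classes are imbalanced.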
CONCLUSIONS
The proposed classification approach, combining an ensemble of segmentation and classification models, proved more accurate than most published results for both binary and multi-class BI-RADS classification.