Attention-based Fusion Network for Breast Cancer Segmentation and Classification Using Multi-modal Ultrasound Images.

Author Information

Cho Yoonjae, Misra Sampa, Managuli Ravi, Barr Richard G, Lee Jeongmin, Kim Chulhong

Affiliations

Department of Electrical Engineering, Convergence IT Engineering, Mechanical Engineering, Medical Device Innovation Center, and Graduate School of Artificial Intelligence, Pohang University of Science and Technology, Pohang, Republic of Korea.

Department of Bioengineering, University of Washington, Seattle, USA.

Publication Information

Ultrasound Med Biol. 2025 Mar;51(3):568-577. doi: 10.1016/j.ultrasmedbio.2024.11.020. Epub 2024 Dec 17.

Abstract

OBJECTIVE

Breast cancer is one of the most commonly occurring cancers in women, so early detection and treatment lead to better patient outcomes. Ultrasound (US) imaging plays a crucial role in the early detection of breast cancer, providing a cost-effective, convenient, and safe diagnostic approach. To date, much research has been conducted to enable reliable and effective early diagnosis of breast cancer through US image analysis. Recently, with the introduction of machine learning technologies such as deep learning (DL), automated lesion segmentation, classification, and identification of malignant masses in breast US images have progressed, and computer-aided diagnosis (CAD) technology is being applied effectively in clinics. Herein, we propose a novel deep learning-based "segmentation + classification" model based on B- and SE-mode images.

METHODS

For the segmentation task, we propose a Multi-Modal Fusion U-Net (MMF-U-Net), which segments lesions by mixing B- and SE-mode information through fusion blocks. After segmenting, the lesion area from the B- and SE-mode images is cropped using a predicted segmentation mask. The encoder part of the pre-trained MMF-U-Net model is then used on the cropped B- and SE-mode breast US images to classify benign and malignant lesions.
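The paper does not include code; as a minimal NumPy sketch of the cropping step described above (the function name, bounding-box approach, and `margin` parameter are our own assumptions, not details from the paper), the predicted mask can be reduced to a bounding box that is then applied identically to both modalities:

```python
import numpy as np

def crop_lesion(b_mode, se_mode, mask, margin=8):
    """Crop the lesion region from co-registered B- and SE-mode images
    using a predicted binary segmentation mask (bounding box + margin)."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        # No lesion predicted: fall back to the full images.
        return b_mode, se_mode
    h, w = mask.shape
    y0, y1 = max(ys.min() - margin, 0), min(ys.max() + margin + 1, h)
    x0, x1 = max(xs.min() - margin, 0), min(xs.max() + margin + 1, w)
    return b_mode[y0:y1, x0:x1], se_mode[y0:y1, x0:x1]
```

The two cropped patches would then be passed to the pre-trained MMF-U-Net encoder for benign/malignant classification, as the abstract describes.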

RESULTS

The experimental results using the proposed method showed good segmentation and classification scores. The Dice score, intersection over union (IoU), precision, and recall are 78.23%, 68.60%, 82.21%, and 80.58%, respectively, using the proposed MMF-U-Net on real-world clinical data. The classification accuracy is 98.46%.
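The Dice score and IoU reported above are standard overlap metrics between a predicted and a reference binary mask; a minimal reference implementation (not from the paper) is:

```python
import numpy as np

def dice_iou(pred, target, eps=1e-7):
    """Dice coefficient and IoU between two binary masks.
    Dice = 2|P∩T| / (|P|+|T|);  IoU = |P∩T| / |P∪T|."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    dice = 2.0 * inter / (pred.sum() + target.sum() + eps)
    iou = inter / (union + eps)
    return dice, iou
```

Note that Dice is always greater than or equal to IoU for the same pair of masks (Dice = 2·IoU / (1 + IoU)), which is consistent with the 78.23% Dice versus 68.60% IoU reported.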

CONCLUSION

Our results show that the proposed method effectively segments the breast lesion area and reliably distinguishes benign from malignant lesions.

