

Joint Localization and Classification of Breast Cancer in B-Mode Ultrasound Imaging via Collaborative Learning With Elastography.

Publication Information

IEEE J Biomed Health Inform. 2022 Sep;26(9):4474-4485. doi: 10.1109/JBHI.2022.3186933. Epub 2022 Sep 9.

Abstract

Convolutional neural networks (CNNs) have been successfully applied to computer-aided ultrasound diagnosis of breast cancer, and several CNN-based methods have been proposed to date. However, most of them treat tumor localization and classification as two separate steps rather than performing them simultaneously, and they suffer from the limited diagnostic information available in B-mode ultrasound (BUS) images. In this study, we develop a novel network, ResNet-GAP, that incorporates both localization and classification into a unified procedure. To enhance its performance, we leverage the stiffness information in the elastography ultrasound (EUS) modality via collaborative learning during training. Specifically, a dual-channel ResNet-GAP network is developed, with one channel for BUS and the other for EUS. In each channel, multiple class activation maps (CAMs) are generated using a series of convolutional kernels of different sizes, and the multi-scale consistency of the CAMs across the two channels is further enforced during network optimization. Experiments on 264 patients show that ResNet-GAP achieves an accuracy of 88.6%, a sensitivity of 95.3%, a specificity of 84.6%, and an AUC of 93.6% on the classification task, and a 1.0NLF of 87.9% on the localization task, outperforming several state-of-the-art approaches.
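To make the described architecture concrete, the following is a minimal PyTorch sketch of the dual-channel ResNet-GAP idea: each modality channel produces multi-scale CAMs via convolutional classifier heads of different kernel sizes, classification logits are obtained by global average pooling (GAP) over the CAMs, and a cross-channel consistency term couples the two channels during training. The backbone choice (torchvision's resnet18), the kernel sizes (1, 3, 5), the sigmoid-normalized MSE consistency loss, and the 0.1 loss weight are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of a dual-channel ResNet-GAP with multi-scale CAM
# consistency, under the assumptions stated above.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet18


class CAMChannel(nn.Module):
    """One modality channel: ResNet features + multi-scale CAM heads + GAP."""

    def __init__(self, num_classes: int = 2, kernel_sizes=(1, 3, 5)):
        super().__init__()
        backbone = resnet18(weights=None)
        # Keep everything up to (but not including) avgpool/fc.
        self.features = nn.Sequential(*list(backbone.children())[:-2])
        # One convolutional classifier per kernel size; each produces a
        # class activation map (CAM) of shape (B, num_classes, H, W).
        self.cam_heads = nn.ModuleList(
            nn.Conv2d(512, num_classes, kernel_size=k, padding=k // 2)
            for k in kernel_sizes
        )

    def forward(self, x):
        feats = self.features(x)                       # (B, 512, H, W)
        cams = [head(feats) for head in self.cam_heads]
        # Global average pooling over each CAM yields per-scale class
        # logits; averaging over scales gives the channel's prediction.
        logits = torch.stack(
            [cam.mean(dim=(2, 3)) for cam in cams]
        ).mean(dim=0)
        return logits, cams


class DualChannelResNetGAP(nn.Module):
    """BUS channel and EUS channel trained jointly via a consistency term."""

    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.bus = CAMChannel(num_classes)
        self.eus = CAMChannel(num_classes)

    def forward(self, bus_img, eus_img):
        bus_logits, bus_cams = self.bus(bus_img)
        eus_logits, eus_cams = self.eus(eus_img)
        return bus_logits, eus_logits, bus_cams, eus_cams


def consistency_loss(bus_cams, eus_cams):
    """Multi-scale CAM consistency: penalize disagreement between the
    normalized BUS and EUS CAMs at every scale (MSE is an assumption)."""
    return sum(
        F.mse_loss(torch.sigmoid(b), torch.sigmoid(e))
        for b, e in zip(bus_cams, eus_cams)
    ) / len(bus_cams)


if __name__ == "__main__":
    model = DualChannelResNetGAP(num_classes=2)
    bus = torch.randn(4, 3, 224, 224)   # B-mode ultrasound batch
    eus = torch.randn(4, 3, 224, 224)   # co-registered elastography batch
    labels = torch.randint(0, 2, (4,))  # 0 = benign, 1 = malignant

    bus_logits, eus_logits, bus_cams, eus_cams = model(bus, eus)
    loss = (
        F.cross_entropy(bus_logits, labels)
        + F.cross_entropy(eus_logits, labels)
        + 0.1 * consistency_loss(bus_cams, eus_cams)  # weight is illustrative
    )
    loss.backward()
    print(f"total loss: {loss.item():.4f}")
```

Since the abstract states that EUS is used only during training, a design like this would let the EUS channel be discarded at inference time, so the deployed model would need only a B-mode image; the CAMs of the BUS channel then serve directly for tumor localization while their pooled logits serve for classification.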

