Graduate School of Science and Technology, University of Tsukuba, Tsukuba, Japan.
National Institute of Advanced Industrial Science and Technology, Tsukuba, Japan.
PLoS One. 2022 Aug 11;17(8):e0271106. doi: 10.1371/journal.pone.0271106. eCollection 2022.
Deep learning techniques have achieved remarkable success in lesion segmentation and in classification between benign and malignant tumors in breast ultrasound images. However, existing studies predominantly focus on devising efficient neural network-based learning structures to tackle each task individually. By contrast, in clinical practice, sonographers perform segmentation and classification as a whole; they investigate the border contours of the tissue while detecting abnormal masses and performing diagnostic analysis. Performing multiple cognitive tasks simultaneously in this manner facilitates the exploitation of the commonalities and differences between tasks. Inspired by this unified recognition process, this study proposes a novel learning scheme, called the cross-task guided network (CTG-Net), for efficient ultrasound breast image understanding. CTG-Net integrates the two most significant tasks in computerized breast lesion pattern investigation: lesion segmentation and tumor classification. Further, it learns efficient cross-task feature representations from ultrasound images together with the task-specific discriminative features that greatly facilitate lesion detection. This is achieved using task-specific attention models to share the prediction results between tasks. Then, guided by the task-specific attention soft masks, the joint feature responses are efficiently calibrated through iterative model training. Finally, a simple feature fusion scheme is used to aggregate the attention-guided features for efficient ultrasound pattern analysis. We performed extensive experimental comparisons on multiple ultrasound datasets. Compared to state-of-the-art multi-task learning approaches, the proposed approach improves the Dice coefficient, the true-positive rate of segmentation, the AUC, and the sensitivity of classification by 11%, 17%, 2%, and 6%, respectively. The results demonstrate that the proposed cross-task guided feature learning framework can effectively fuse the complementary information of the ultrasound image segmentation and classification tasks to achieve accurate tumor localization. Thus, it can aid sonographers in detecting and diagnosing breast cancer.
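To make the cross-task guidance mechanism described above concrete, the following is a minimal PyTorch sketch of one plausible reading of it: each task head produces a soft spatial attention mask, each branch's features are recalibrated by the other task's mask, and the calibrated features are fused before the final segmentation and classification outputs. The paper's code is not reproduced here, so all module names, tensor shapes, the residual reweighting, and the concatenation-based fusion are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: module names, shapes, and the fusion rule are
# assumptions standing in for CTG-Net's actual architecture.
import torch
import torch.nn as nn

class CrossTaskGuidance(nn.Module):
    """Toy cross-task guidance block: each task's prediction becomes a soft
    spatial attention mask that recalibrates the other task's features; the
    attention-guided features are then fused by simple concatenation."""

    def __init__(self, channels: int):
        super().__init__()
        # Hypothetical heads standing in for the two task branches.
        self.seg_head = nn.Conv2d(channels, 1, kernel_size=1)         # lesion mask logits
        self.cls_attn = nn.Conv2d(channels, 1, kernel_size=1)         # class-evidence map
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)  # feature fusion
        self.classifier = nn.Linear(channels, 2)                      # benign vs. malignant

    def forward(self, feat_seg: torch.Tensor, feat_cls: torch.Tensor):
        # Task-specific soft masks in [0, 1], shared across tasks.
        seg_mask = torch.sigmoid(self.seg_head(feat_seg))  # where the lesion is
        cls_mask = torch.sigmoid(self.cls_attn(feat_cls))  # where class evidence is
        # Cross-task calibration: each branch is reweighted by the *other*
        # task's attention; the residual form preserves the original responses.
        feat_seg_guided = feat_seg * (1.0 + cls_mask)
        feat_cls_guided = feat_cls * (1.0 + seg_mask)
        fused = self.fuse(torch.cat([feat_seg_guided, feat_cls_guided], dim=1))
        seg_logits = self.seg_head(fused)                    # segmentation output
        cls_logits = self.classifier(fused.mean(dim=(2, 3))) # classification output
        return seg_logits, cls_logits

# Minimal usage on dummy backbone features (batch of 2, 64 channels, 32x32).
if __name__ == "__main__":
    block = CrossTaskGuidance(channels=64)
    f_seg = torch.randn(2, 64, 32, 32)
    f_cls = torch.randn(2, 64, 32, 32)
    seg_out, cls_out = block(f_seg, f_cls)
    print(seg_out.shape, cls_out.shape)  # torch.Size([2, 1, 32, 32]) torch.Size([2, 2])
```

In this reading, the sigmoid soft masks play the role of the "task-specific attention soft masks" that calibrate the joint feature responses, and the 1x1 convolution over the concatenated branches plays the role of the "simple feature fusion scheme"; during training, a segmentation loss and a classification loss would be applied jointly so the calibration is refined iteratively.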