School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing, China.
Comput Methods Programs Biomed. 2022 Jun;221:106891. doi: 10.1016/j.cmpb.2022.106891. Epub 2022 May 14.
Automated breast ultrasound (ABUS) imaging technology has been widely used in clinical diagnosis. Accurate lesion segmentation in ABUS images is essential in computer-aided diagnosis (CAD) systems. Although deep learning-based approaches have been widely employed in medical image analysis, the large variability of lesions and imaging interference make ABUS lesion segmentation challenging.
In this paper, we propose a novel deepest semantically guided multi-scale feature fusion network (DSGMFFN) for lesion segmentation in 2D ABUS slices. To cope with the large variability of lesions, a deepest semantically guided decoder (DSGNet) and a multi-scale feature fusion model (MFFM) are designed, in which the deepest semantics is fully utilized to guide decoding and feature fusion. That is, the deepest information is given the highest weight in the feature fusion process and participates in every decoding stage. To address the challenge of imaging interference, a novel mixed attention mechanism is developed that integrates spatial self-attention and channel self-attention, capturing correlations among pixels and among channels to highlight the lesion region.
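The core fusion idea above can be illustrated with a toy sketch. This is not the paper's implementation; the function name, the fixed fusion weights, and the two-stage example are illustrative assumptions. It only shows the principle that the deepest (most semantic) feature map receives the largest fusion weight and additionally re-enters every decoding stage:

```python
def fuse_with_deepest(features, weights):
    """Toy weighted fusion of multi-scale feature vectors.

    `features` is a list of equal-length vectors ordered shallow -> deep;
    `weights` assigns the largest weight to the deepest feature, mirroring
    the idea that the deepest semantics guides the fusion.
    (Illustrative sketch, not the DSGMFFN architecture itself.)
    """
    fused = [0.0] * len(features[0])
    for feat, w in zip(features, weights):
        for i, v in enumerate(feat):
            fused[i] += w * v
    return fused

# Three scales, assumed already resampled to a common size.
shallow, mid, deep = [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]
weights = [0.2, 0.3, 0.5]  # deepest feature gets the highest weight

stage1 = fuse_with_deepest([shallow, mid, deep], weights)
# The deepest feature also participates directly in the next stage:
stage2 = fuse_with_deepest([stage1, deep], [0.5, 0.5])
```

In a real network the weights would be learned and the fusion would operate on feature maps rather than flat vectors, but the guiding role of the deepest features is the same.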
The proposed DSGMFFN is evaluated on 3742 slices from 170 ABUS volumes. Experimental results indicate that DSGMFFN achieves 84.54% and 73.24% in Dice similarity coefficient (DSC) and intersection over union (IoU), respectively.
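For reference, the two reported metrics are computed as DSC = 2|P∩T| / (|P|+|T|) and IoU = |P∩T| / |P∪T| for predicted mask P and ground-truth mask T. A minimal sketch for flat binary masks (the function name and the example masks are illustrative, not from the paper):

```python
def dice_iou(pred, target):
    """Compute Dice similarity coefficient (DSC) and intersection over
    union (IoU) for two binary masks given as flat lists of 0/1 values."""
    inter = sum(p * t for p, t in zip(pred, target))
    p_sum, t_sum = sum(pred), sum(target)
    union = p_sum + t_sum - inter
    dsc = 2 * inter / (p_sum + t_sum) if (p_sum + t_sum) else 1.0
    iou = inter / union if union else 1.0
    return dsc, iou

# Tiny example: 2 overlapping pixels out of 3 predicted / 3 true.
dsc, iou = dice_iou([1, 1, 0, 1, 0, 0], [1, 0, 0, 1, 1, 0])
# dsc = 2*2/(3+3) = 2/3, iou = 2/4 = 0.5
```

Note the fixed relation DSC = 2·IoU / (1 + IoU), which is why the two reported scores rank methods consistently.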
The proposed method outperforms state-of-the-art methods in ABUS lesion segmentation, and segmentation errors caused by lesion variability and imaging interference in ABUS images can be alleviated.