
HMA-Net: a hybrid mixer framework with multihead attention for breast ultrasound image segmentation.

Author information

Sara Koshy Soumya, Anbarasi L Jani

Affiliation

School of Computer Science and Engineering, Vellore Institute of Technology, Chennai, India.

Publication information

Front Artif Intell. 2025 Jun 18;8:1572433. doi: 10.3389/frai.2025.1572433. eCollection 2025.

Abstract

INTRODUCTION

Breast cancer is a severe illness predominantly affecting women and, in most cases, leads to loss of life if left undetected. Early detection can significantly reduce the mortality rate associated with breast cancer. Ultrasound imaging is widely used to detect the disease effectively, and segmenting breast ultrasound images aids in the identification and localization of tumors, thereby enhancing detection accuracy. Numerous computer-aided methods have been proposed for the segmentation of breast ultrasound images.

METHODS

A deep learning-based architecture utilizing a ConvMixer-based encoder and a ConvNeXT-based decoder, coupled with convolution-enhanced multihead attention, is proposed for segmenting breast ultrasound images. The enhanced ConvMixer modules use spatial filtering and channel-wise integration to efficiently capture local and global contextual features, improving feature relevance and thus segmentation accuracy through dynamic channel recalibration and residual connections. The attention-based bottleneck enhances segmentation by using multihead attention to capture long-range dependencies, enabling the model to focus on relevant features across distinct regions. The enhanced ConvNeXT modules with squeeze-and-excitation employ depthwise convolution for efficient spatial filtering, layer normalization to stabilize training, and residual connections to preserve relevant features for accurate segmentation. A combined loss function integrating binary cross-entropy and Dice loss is used to train the model.
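The combined loss described above can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the abstract does not state how the two terms are weighted, so the equal weighting (`bce_weight=0.5`), the smoothing constant, and the function name `bce_dice_loss` are assumptions.

```python
import numpy as np

def bce_dice_loss(pred, target, smooth=1e-6, bce_weight=0.5):
    """Combined binary cross-entropy + Dice loss on probability maps.

    pred and target are float arrays of the same shape with values in [0, 1]
    (e.g. a predicted segmentation mask and its ground-truth mask).
    """
    pred = np.clip(pred, 1e-7, 1 - 1e-7)  # avoid log(0)
    # Binary cross-entropy, averaged over all pixels
    bce = -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))
    # Soft Dice: 2|P∩T| / (|P| + |T|), smoothed to handle empty masks
    intersection = np.sum(pred * target)
    dice = (2 * intersection + smooth) / (np.sum(pred) + np.sum(target) + smooth)
    return bce_weight * bce + (1 - bce_weight) * (1 - dice)
```

Weighting the two terms equally is a common default; BCE provides stable per-pixel gradients while the Dice term directly optimizes region overlap, which helps with the class imbalance typical of small tumors in ultrasound images.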

RESULTS

The proposed model delivers exceptional performance in segmenting intricate structures, as confirmed by comprehensive experiments on two datasets: the Breast Ultrasound Images (BUSI) dataset and the BrEaST dataset of breast ultrasound images. The model achieved Jaccard indices of 98.04% and 94.84% and Dice similarity coefficients of 99.01% and 97.35% on the BUSI and BrEaST datasets, respectively.
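The two metrics reported above are standard overlap measures between a predicted and a ground-truth binary mask, and can be computed as in this minimal NumPy sketch (function names are illustrative, not from the paper):

```python
import numpy as np

def jaccard_index(pred, target):
    """Jaccard index (IoU): |P ∩ T| / |P ∪ T| for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union else 1.0

def dice_coefficient(pred, target):
    """Dice similarity coefficient: 2|P ∩ T| / (|P| + |T|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    total = pred.sum() + target.sum()
    return 2 * inter / total if total else 1.0
```

The two metrics are monotonically related (Dice = 2J / (1 + J)), which is why the Dice scores above are consistently higher than the corresponding Jaccard scores.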

DISCUSSION

The ConvMixer and ConvNeXT modules are integrated with convolution-enhanced multihead attention, which enhances the model's ability to capture local and global contextual information. The strong performance on the BUSI and BrEaST datasets demonstrates the model's robustness and generalization capability.


Figure: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8aa9/12213868/b359a498f98f/frai-08-1572433-g0001.jpg
