
ThreeF-Net: Fine-grained feature fusion network for breast ultrasound image segmentation.

Author information

Bian Xuesheng, Liu Jia, Xu Sen, Liu Weiquan, Mei Leyi, Xiao Chaoshen, Yang Fan

Affiliations

School of Information Engineering, Yancheng Institute of Technology, Hope Avenue Middle Road, Yancheng, 224051, Jiangsu, China.


Publication information

Comput Biol Med. 2025 Aug;194:110527. doi: 10.1016/j.compbiomed.2025.110527. Epub 2025 Jun 14.

Abstract

Convolutional Neural Networks (CNNs) have achieved remarkable success in breast ultrasound image segmentation, but they still face several challenges when dealing with breast lesions. Because CNNs are limited in modeling long-range dependencies, they often handle similar intensity distributions, irregular lesion shapes, and blurry boundaries poorly, leading to low segmentation accuracy. To address these issues, we propose ThreeF-Net, a fine-grained feature fusion network. The network combines the strengths of CNNs and Transformers, capturing local features while modeling long-range dependencies, thereby improving the accuracy and stability of segmentation. Specifically, we design a Transformer-assisted Dual Encoder architecture (TDE) that integrates convolutional modules and self-attention modules to achieve collaborative learning of local and global features. In addition, we design a Global Group Feature Extraction (GGFE) module that effectively fuses the features learned by the CNN and Transformer branches, enhancing feature representation. To further improve performance, we introduce a Dynamic Fine-grained Convolution (DFC) module, which significantly improves lesion boundary segmentation accuracy by dynamically adjusting convolution kernels and capturing multi-scale features. Comparative experiments against state-of-the-art segmentation methods on three public breast ultrasound datasets show that ThreeF-Net outperforms existing methods on multiple key evaluation metrics.
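The authors' implementation is not reproduced here, but the core idea behind the dual-encoder design — a local convolutional branch and a global self-attention branch whose outputs are fused — can be sketched on a toy 1D signal. Everything below (the function names, the scalar single-head attention, and the convex-combination fusion standing in for GGFE) is a hypothetical, simplified illustration of the general CNN-plus-Transformer fusion pattern, not the paper's code.

```python
import math

def conv1d(x, kernel):
    """Local branch: 1D convolution with zero padding (same output length).
    Each output mixes only a small neighbourhood, like a CNN feature map."""
    k = len(kernel)
    pad = k // 2
    xp = [0.0] * pad + list(x) + [0.0] * pad
    return [sum(kernel[j] * xp[i + j] for j in range(k)) for i in range(len(x))]

def self_attention(x):
    """Global branch: single-head self-attention over scalar tokens.
    Each output is a softmax-weighted mix of ALL positions, so every token
    can attend to every other one (the long-range dependency CNNs lack)."""
    n = len(x)
    out = []
    for i in range(n):
        scores = [x[i] * x[j] for j in range(n)]       # dot-product scores
        m = max(scores)
        weights = [math.exp(s - m) for s in scores]    # stable softmax
        z = sum(weights)
        out.append(sum((w / z) * x[j] for j, w in enumerate(weights)))
    return out

def fuse(local, global_, alpha=0.5):
    """Toy stand-in for a fusion module (e.g. GGFE): a convex combination
    of the local and global feature streams."""
    return [alpha * l + (1 - alpha) * g for l, g in zip(local, global_)]

signal = [0.0, 1.0, 0.0, 0.0, 2.0, 0.0]
local_feat = conv1d(signal, [0.25, 0.5, 0.25])  # smooths neighbours only
global_feat = self_attention(signal)            # mixes all positions
fused = fuse(local_feat, global_feat)
```

In the actual network the two branches are deep 2D encoders and the fusion is learned; this sketch only shows why combining a neighbourhood-limited operator with an all-pairs attention operator lets one output carry both local texture and global context.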

