
Dual branch segment anything model-transformer fusion network for accurate breast ultrasound image segmentation.

Author Information

Li Yu, Huang Jin, Zhang Yimin, Deng Jingwen, Zhang Jingwen, Dong Lan, Wang Du, Mei Liye, Lei Cheng

Affiliations

The Institute of Technological Sciences, Wuhan University, Wuhan, China.

The Department of Breast and Thyroid Surgery, Renmin Hospital of Wuhan University, Wuhan, China.

Publication Information

Med Phys. 2025 Mar 19. doi: 10.1002/mp.17751.

Abstract

BACKGROUND

Precise and rapid ultrasound-based breast cancer diagnosis is essential for effective treatment. However, existing ultrasound image segmentation methods often fail to capture both global contextual features and fine-grained boundary details.

PURPOSE

This study proposes a dual-branch network architecture that combines the Swin Transformer and Segment Anything Model (SAM) to enhance breast ultrasound image (BUSI) segmentation accuracy and reliability.

METHODS

Our network integrates the global attention mechanism of the Swin Transformer with fine-grained boundary detection from SAM through a multi-stage feature fusion module. We evaluated our method against state-of-the-art methods on two datasets: the Breast Ultrasound Images dataset from Wuhan University (BUSI-WHU), which contains 927 images (560 benign and 367 malignant) with ground-truth masks annotated by radiologists, and the public BUSI dataset. Performance was evaluated using mean Intersection-over-Union (mIoU), 95th-percentile Hausdorff Distance (HD95), and the Dice similarity coefficient, with statistical significance assessed using two-tailed independent t-tests with Holm-Bonferroni correction.
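For readers unfamiliar with these segmentation metrics, the sketch below shows one common way to compute them for binary masks using NumPy and SciPy. This is an illustrative simplification, not the paper's evaluation code: the function names are ours, and the HD95 here is taken over all foreground pixels, whereas many implementations use boundary pixels only.

```python
import numpy as np
from scipy.spatial.distance import cdist

def dice_and_iou(pred, gt):
    """Dice similarity coefficient and IoU for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    total = pred.sum() + gt.sum()
    iou = inter / union if union else 1.0
    dice = 2 * inter / total if total else 1.0
    return dice, iou

def hd95(pred, gt):
    """95th-percentile symmetric Hausdorff distance over foreground pixels
    (a simplification; implementations often use boundary pixels only)."""
    p = np.argwhere(pred.astype(bool))
    g = np.argwhere(gt.astype(bool))
    d = cdist(p, g)  # pairwise Euclidean distances between foreground pixels
    return float(max(np.percentile(d.min(axis=1), 95),
                     np.percentile(d.min(axis=0), 95)))

# Two overlapping 4x4 squares on an 8x8 grid: 9 shared pixels.
pred = np.zeros((8, 8), dtype=int); pred[2:6, 2:6] = 1
gt = np.zeros((8, 8), dtype=int); gt[3:7, 3:7] = 1
dice, iou = dice_and_iou(pred, gt)  # dice = 18/32, iou = 9/23
```

Note that mIoU as reported in segmentation papers is typically this per-image IoU averaged over the test set (and sometimes over classes).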

RESULTS

On our proposed BUSI-WHU dataset, the network achieved an mIoU of 90.82% and an HD95 of 23.50 pixels, a significant improvement over current state-of-the-art methods, with effect sizes for mIoU ranging from 0.38 to 0.61 (p < 0.05). On the public BUSI dataset, it achieved an mIoU of 82.83% and an HD95 of 71.13 pixels, with comparable improvements and effect sizes for mIoU ranging from 0.45 to 0.58 (p < 0.05).
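The significance testing described above can be sketched as follows. This is a hedged illustration of the general procedure (two-tailed independent t-test, Holm-Bonferroni step-down correction, pooled-SD Cohen's d), not the authors' analysis code; the per-image Dice scores below are synthetic placeholders.

```python
import numpy as np
from scipy import stats

def holm_bonferroni(pvals, alpha=0.05):
    """Step-down Holm-Bonferroni correction: returns a boolean rejection
    decision for each hypothesis at family-wise error rate alpha."""
    pvals = np.asarray(pvals, dtype=float)
    order = np.argsort(pvals)
    m = len(pvals)
    reject = np.zeros(m, dtype=bool)
    for rank, idx in enumerate(order):
        if pvals[idx] <= alpha / (m - rank):
            reject[idx] = True
        else:
            break  # once one test fails, all larger p-values fail too
    return reject

# Hypothetical per-image Dice scores for two methods (illustrative only).
rng = np.random.default_rng(0)
ours = rng.normal(0.91, 0.02, size=30)
baseline = rng.normal(0.88, 0.02, size=30)
t_stat, p_val = stats.ttest_ind(ours, baseline)  # two-tailed by default
# Pooled-SD Cohen's d, one common effect-size definition.
cohens_d = (ours.mean() - baseline.mean()) / np.sqrt(
    (ours.var(ddof=1) + baseline.var(ddof=1)) / 2)
```

When a method is compared against several baselines, the Holm-Bonferroni step keeps the family-wise error rate at alpha across all those comparisons.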

CONCLUSIONS

Our dual-branch network leverages the complementary strengths of the Swin Transformer and SAM through a fusion mechanism, demonstrating superior breast ultrasound segmentation performance. Our code is publicly available at https://github.com/Skylanding/DSATNet.

