

Enhancing medical image segmentation with a multi-transformer U-Net.

Author information

Dan Yongping, Jin Weishou, Yue Xuebin, Wang Zhida

Affiliations

School of Electronic and Information, Zhongyuan University of Technology, Zhengzhou, Henan, China.

Research Organization of Science and Technology, Ritsumeikan University, Kusatsu, Japan.

Publication information

PeerJ. 2024 Feb 29;12:e17005. doi: 10.7717/peerj.17005. eCollection 2024.

Abstract

Various segmentation networks based on Swin Transformer have shown promise in medical segmentation tasks. Nonetheless, challenges such as lower accuracy and slower training convergence have persisted. To tackle these issues, we introduce a novel approach that combines the Swin Transformer and the Deformable Transformer to enhance overall model performance. We leverage the Swin Transformer's window attention mechanism to capture local feature information and employ the Deformable Transformer to adjust sampling positions dynamically, accelerating model convergence and aligning it more closely with object shapes and sizes. By combining both Transformer modules and incorporating additional skip connections to minimize information loss, our proposed model excels at rapidly and accurately segmenting CT or X-ray lung images. Experimental results demonstrate the effectiveness of our model: it surpasses the performance of the standalone Swin Transformer's Swin Unet and converges more rapidly under identical conditions, yielding accuracy improvements of 0.7% (to 88.18%) and 2.7% (to 98.01%) on the COVID-19 CT scan lesion segmentation dataset and the Chest X-ray Masks and Labels dataset, respectively. This advancement has the potential to aid medical practitioners in early diagnosis and treatment decision-making.
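The window attention mechanism mentioned in the abstract restricts self-attention to non-overlapping local windows of the feature map, which is what makes Swin-style attention cheap enough for dense prediction. The sketch below shows just that partitioning step plus plain scaled dot-product attention within each window, in NumPy; it is an illustrative toy, not the paper's implementation, and the function names are our own.

```python
import numpy as np

def window_partition(x, ws):
    """Split an (H, W, C) feature map into non-overlapping ws x ws windows.

    Returns an array of shape (num_windows, ws*ws, C), so attention can be
    computed independently inside each window.
    """
    H, W, C = x.shape
    assert H % ws == 0 and W % ws == 0, "feature map must tile evenly"
    x = x.reshape(H // ws, ws, W // ws, ws, C)
    x = x.transpose(0, 2, 1, 3, 4)  # gather each window's rows/cols together
    return x.reshape(-1, ws * ws, C)

def window_attention(windows):
    """Scaled dot-product self-attention applied per window (no projections)."""
    d = windows.shape[-1]
    scores = windows @ windows.transpose(0, 2, 1) / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)
    return attn @ windows

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 8, 16))  # toy 8x8 feature map, 16 channels
wins = window_partition(feat, ws=4)     # -> (4, 16, 16): four 4x4 windows
out = window_attention(wins)            # same shape as the input windows
print(wins.shape, out.shape)            # (4, 16, 16) (4, 16, 16)
```

In the full Swin design the windows are additionally shifted between successive layers so that information can flow across window boundaries; that detail is omitted here.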

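The deformable attention described in the abstract replaces a fixed sampling grid with learned 2D offsets around each reference point, so the model attends to positions that follow the lesion's actual shape. Its core primitive is bilinear sampling of the feature map at fractional coordinates, sketched below in NumPy with hand-picked offsets standing in for the learned ones (all names here are illustrative, not from the paper's code).

```python
import numpy as np

def bilinear_sample(feat, pts):
    """Bilinearly sample an (H, W, C) feature map at fractional (y, x) points.

    pts has shape (N, 2); returns the (N, C) interpolated feature vectors.
    """
    H, W, _ = feat.shape
    y = np.clip(pts[:, 0], 0, H - 1)
    x = np.clip(pts[:, 1], 0, W - 1)
    y0 = np.floor(y).astype(int); x0 = np.floor(x).astype(int)
    y1 = np.minimum(y0 + 1, H - 1); x1 = np.minimum(x0 + 1, W - 1)
    wy = (y - y0)[:, None]; wx = (x - x0)[:, None]  # fractional weights
    return (feat[y0, x0] * (1 - wy) * (1 - wx)
            + feat[y0, x1] * (1 - wy) * wx
            + feat[y1, x0] * wy * (1 - wx)
            + feat[y1, x1] * wy * wx)

# Toy 4x4 single-channel map whose value at (y, x) is 4*y + x.
feat = np.arange(16, dtype=float).reshape(4, 4, 1)
ref = np.array([[1.0, 1.0]])                    # one reference point
offsets = np.array([[0.5, 0.5], [-0.5, 0.25]])  # stand-ins for learned offsets
samples = bilinear_sample(feat, ref + offsets)
print(samples.ravel())  # [7.5, 3.25]: exact, since the map is linear in (y, x)
```

In a deformable attention module these offsets are predicted per query by a small linear layer, and the sampled features are then combined with predicted attention weights; only the sampling step is shown here.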

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/137b/10909362/c7a3213b431b/peerj-12-17005-g001.jpg
