nnFormer: Volumetric Medical Image Segmentation via a 3D Transformer.

Author Information

Zhou Hong-Yu, Guo Jiansen, Zhang Yinghao, Han Xiaoguang, Yu Lequan, Wang Liansheng, Yu Yizhou

Publication Information

IEEE Trans Image Process. 2023;32:4036-4045. doi: 10.1109/TIP.2023.3293771. Epub 2023 Jul 19.

Abstract

Transformer, the model of choice for natural language processing, has drawn scant attention from the medical imaging community. Given their ability to exploit long-term dependencies, transformers hold promise for helping typical convolutional neural networks learn more contextualized visual representations. However, most recently proposed transformer-based segmentation approaches simply treat transformers as auxiliary modules that help encode global context into convolutional representations. To address this issue, we introduce nnFormer (i.e., not-another transFormer), a 3D transformer for volumetric medical image segmentation. nnFormer not only exploits a combination of interleaved convolution and self-attention operations, but also introduces local and global volume-based self-attention mechanisms to learn volume representations. Moreover, nnFormer replaces the traditional concatenation/summation operations in the skip connections of U-Net-like architectures with skip attention. Experiments show that nnFormer outperforms previous transformer-based counterparts by large margins on three public datasets. Compared to nnUNet, the most widely recognized convnet-based 3D medical segmentation model, nnFormer produces significantly lower HD95 and is much more computationally efficient. Furthermore, we show that nnFormer and nnUNet are highly complementary to each other in model ensembling. Code and models for nnFormer are available at https://git.io/JSf3i.
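To make the skip-attention idea concrete, below is a minimal PyTorch sketch of a cross-attention skip connection, in which decoder features act as queries over same-scale encoder features instead of being concatenated or summed with them. The class name, hyperparameters, and the residual/norm placement are illustrative assumptions, not the authors' implementation; see https://git.io/JSf3i for the official code.

```python
# A minimal sketch (not the authors' code) of a "skip attention" block:
# the decoder feature map queries the same-scale encoder feature map
# through multi-head cross-attention, replacing concatenation/summation.
import torch
import torch.nn as nn


class SkipAttention(nn.Module):
    """Cross-attention skip connection: decoder tokens query encoder tokens."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, dec: torch.Tensor, enc: torch.Tensor) -> torch.Tensor:
        # dec, enc: (batch, n_voxels, dim), i.e. flattened D*H*W 3D volumes
        out, _ = self.attn(query=dec, key=enc, value=enc)
        return self.norm(dec + out)  # residual connection + layer norm


if __name__ == "__main__":
    b, d, h, w, c = 1, 8, 16, 16, 96           # toy volume and channel sizes
    dec = torch.randn(b, d * h * w, c)         # decoder features (queries)
    enc = torch.randn(b, d * h * w, c)         # encoder features (keys/values)
    print(SkipAttention(c)(dec, enc).shape)    # torch.Size([1, 2048, 96])
```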

