CSAM: A 2.5D Cross-Slice Attention Module for Anisotropic Volumetric Medical Image Segmentation.

Author Information

Alex Ling Yu Hung, Haoxin Zheng, Kai Zhao, Xiaoxi Du, Kaifeng Pang, Qi Miao, Steven S. Raman, Demetri Terzopoulos, Kyunghyun Sung

Affiliations

University of California, Los Angeles.

Publication Information

IEEE Winter Conf Appl Comput Vis. 2024 Jan;2024:5911-5920. doi: 10.1109/wacv57701.2024.00582. Epub 2024 Apr 9.

Abstract

A large portion of volumetric medical data, especially magnetic resonance imaging (MRI) data, is anisotropic, as the through-plane resolution is typically much lower than the in-plane resolution. Both 3D and purely 2D deep learning-based segmentation methods are deficient in dealing with such volumetric data since the performance of 3D methods suffers when confronting anisotropic data, and 2D methods disregard crucial volumetric information. Insufficient work has been done on 2.5D methods, in which 2D convolution is mainly used in concert with volumetric information. These models focus on learning the relationship across slices, but typically have many parameters to train. We offer a Cross-Slice Attention Module (CSAM) with minimal trainable parameters, which captures information across all the slices in the volume by applying semantic, positional, and slice attention on deep feature maps at different scales. Our extensive experiments using different network architectures and tasks demonstrate the usefulness and generalizability of CSAM. Associated code is available at https://github.com/aL3x-O-o-Hung/CSAM.
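The abstract's description of 2.5D cross-slice attention can be made concrete with a short sketch. The following is a minimal, hypothetical PyTorch illustration, not the authors' CSAM implementation (which applies semantic, positional, and slice attention at multiple feature scales; see the linked repository for the actual code). The idea shown: each 2D slice's deep feature map is treated as one element of the volume and attends to every other slice, so purely 2D convolutional features gain volumetric context. The class name CrossSliceAttentionSketch, the tensor shapes, and the pooling choices are illustrative assumptions only.

```python
# Minimal, hypothetical sketch of cross-slice attention (NOT the authors' CSAM code;
# see https://github.com/aL3x-O-o-Hung/CSAM for the actual implementation).
# Assumes the deep features of one anisotropic volume are stacked as (S, C, H, W),
# where S is the number of slices.

import torch
import torch.nn as nn


class CrossSliceAttentionSketch(nn.Module):
    """Toy 2.5D attention: every slice attends to every other slice of the volume."""

    def __init__(self, channels: int):
        super().__init__()
        # 1x1 convolutions produce per-slice query/key/value embeddings.
        self.query = nn.Conv2d(channels, channels, kernel_size=1)
        self.key = nn.Conv2d(channels, channels, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (S, C, H, W) deep feature maps of one volume.
        s, c, h, w = feats.shape
        # Pool each slice to a single descriptor so attention runs across slices only.
        q = self.query(feats).mean(dim=(2, 3))           # (S, C)
        k = self.key(feats).mean(dim=(2, 3))             # (S, C)
        v = self.value(feats).reshape(s, c * h * w)      # (S, C*H*W)
        # Slice-to-slice attention weights.
        attn = torch.softmax(q @ k.t() / c ** 0.5, dim=-1)  # (S, S)
        out = (attn @ v).reshape(s, c, h, w)
        # Residual connection keeps the original in-plane features.
        return feats + out


if __name__ == "__main__":
    # Example: 20 slices of 64-channel features at 32x32 in-plane resolution.
    module = CrossSliceAttentionSketch(channels=64)
    x = torch.randn(20, 64, 32, 32)
    print(module(x).shape)  # torch.Size([20, 64, 32, 32])
```

In practice, a module of this kind would be inserted at one or more scales of a 2D encoder-decoder segmentation network, so that per-slice predictions are informed by the whole slice stack while the number of added trainable parameters stays small.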

Similar Articles

Volumetric memory network for interactive medical image segmentation.
Med Image Anal. 2023 Jan;83:102599. doi: 10.1016/j.media.2022.102599. Epub 2022 Sep 6.

CQformer: Learning Dynamics Across Slices in Medical Image Segmentation.
IEEE Trans Med Imaging. 2025 Feb;44(2):1043-1057. doi: 10.1109/TMI.2024.3477555. Epub 2025 Feb 4.
