
CQformer: Learning Dynamics Across Slices in Medical Image Segmentation.

Author Information

Zhang Shengjie, Shen Xin, Chen Xiang, Yu Ziqi, Ren Bohan, Yang Haibo, Zhang Xiao-Yong, Zhou Yuan

Publication Information

IEEE Trans Med Imaging. 2025 Feb;44(2):1043-1057. doi: 10.1109/TMI.2024.3477555. Epub 2025 Feb 4.

Abstract

Prevalent studies on deep learning-based 3D medical image segmentation capture the continuous variation across 2D slices mainly via convolution, Transformers, inter-slice interaction, and time-series models. In this work, by modeling this variation with an ordinary differential equation (ODE), we propose a cross-instance query-guided Transformer architecture (CQformer) that leverages features from preceding slices to improve the segmentation of subsequent slices. Its key component is a cross-attention mechanism cast in an ODE formulation, which bridges the features of contiguous 2D slices of the 3D volumetric data. In addition, a regression head is employed to narrow the gap between the bottleneck and the prediction layer. Extensive experiments on 7 datasets spanning multiple modalities (CT, MRI) and tasks (organ, tissue, and lesion) demonstrate that CQformer outperforms previous state-of-the-art segmentation algorithms on 6 datasets by 0.44%-2.45%, and achieves the second-highest performance of 88.30% on the BTCV dataset. The code is available at https://github.com/qbmizsj/CQformer.
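The core idea described in the abstract is to treat the feature change between adjacent slices as an ODE whose dynamics are given by cross-attention: queries come from the preceding slice's features, keys and values from the next slice, and the features are advanced with a discrete solver step. The sketch below is a minimal, single-head NumPy illustration of that formulation under simplifying assumptions (an explicit Euler step, no learned projection matrices, no Transformer backbone); all function names and shapes are hypothetical and do not come from the CQformer code.

```python
import numpy as np

def softmax(z, axis=-1):
    # numerically stable softmax over the given axis
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q_feats, kv_feats):
    # single-head scaled dot-product attention:
    # queries from the preceding slice, keys/values from the next slice
    d = q_feats.shape[-1]
    scores = q_feats @ kv_feats.T / np.sqrt(d)   # (N, M) similarity
    return softmax(scores) @ kv_feats            # (N, d) attended features

def euler_step(h_prev, x_next, step=0.1):
    # ODE view of cross-slice propagation:
    # h_{t+1} = h_t + step * f(h_t, x_{t+1}), with f given by cross-attention
    return h_prev + step * cross_attention(h_prev, x_next)

# usage: propagate features through a stack of slice feature maps
rng = np.random.default_rng(0)
slices = [rng.standard_normal((16, 8)) for _ in range(3)]  # 3 slices, 16 tokens, dim 8
h = slices[0]
for x in slices[1:]:
    h = euler_step(h, x)
print(h.shape)  # (16, 8)
```

In this reading, a smaller `step` makes the propagated state change more gradually across slices, which is one way an ODE formulation can encode the assumption that anatomy varies continuously through the volume.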

