

DMCT-Net: dual modules convolution transformer network for head and neck tumor segmentation in PET/CT.

Affiliations

College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao 266590 Shandong, People's Republic of China.

College of Computer Science and Engineering, Qufu Normal University, Rizhao, 276827, People's Republic of China.

Publication Information

Phys Med Biol. 2023 May 22;68(11). doi: 10.1088/1361-6560/acd29f.

DOI: 10.1088/1361-6560/acd29f
PMID: 37141902
Abstract

Accurate segmentation of head and neck (H&N) tumors is critical in radiotherapy. However, existing methods lack effective strategies to integrate local and global information, strong semantic and context information, and spatial and channel features, all of which are effective clues for improving tumor segmentation accuracy. In this paper, we propose a novel method called the dual modules convolution transformer network (DMCT-Net) for H&N tumor segmentation in fluorodeoxyglucose positron emission tomography/computed tomography (FDG-PET/CT) images. The DMCT-Net consists of the convolution transformer block (CTB), the squeeze-and-excitation (SE) pool module, and the multi-attention fusion (MAF) module. First, the CTB is designed to capture remote dependencies and local multi-scale receptive field information using standard convolution, dilated convolution, and the transformer operation. Second, to extract feature information from different angles, we construct the SE pool module, which not only extracts strong semantic features and context features simultaneously but also uses SE normalization to adaptively fuse features and adjust the feature distribution. Third, the MAF module is proposed to combine global context information, channel information, and voxel-wise local spatial information. In addition, we adopt up-sampling auxiliary paths to supplement multi-scale information. Experimental results show that the method achieves better or more competitive segmentation performance than several advanced methods on three datasets. The best segmentation metric scores are as follows: DSC of 0.781, HD95 of 3.044, precision of 0.798, and sensitivity of 0.857. Comparative experiments with bimodal and single-modal inputs indicate that bimodal input provides more sufficient and effective information for improving tumor segmentation performance, and ablation experiments verify the effectiveness and significance of each module. We propose a new network for 3D H&N tumor segmentation in FDG-PET/CT images that achieves high accuracy.
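The SE pool module described in the abstract builds on squeeze-and-excitation channel reweighting: globally pool each channel, pass the pooled vector through a small bottleneck, and use the resulting gates to rescale the channels. A minimal NumPy sketch of that underlying operation on a 3D (volumetric) feature map, with arbitrary illustrative weights `w1`/`w2` — not the authors' implementation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def se_recalibrate(x, w1, w2):
    """Squeeze-and-excitation channel recalibration on a 3D feature map.

    x  : (C, D, H, W) feature map
    w1 : (C//r, C) reduction weights (bottleneck down-projection)
    w2 : (C, C//r) expansion weights (bottleneck up-projection)
    """
    # Squeeze: global average pool over spatial dims -> one value per channel
    s = x.mean(axis=(1, 2, 3))                      # (C,)
    # Excitation: bottleneck MLP producing per-channel gates in (0, 1)
    g = sigmoid(w2 @ np.maximum(w1 @ s, 0.0))       # (C,)
    # Scale: reweight each channel of the input by its gate
    return x * g[:, None, None, None]

rng = np.random.default_rng(0)
C, r = 8, 2
x = rng.standard_normal((C, 4, 4, 4))
w1 = rng.standard_normal((C // r, C)) * 0.1
w2 = rng.standard_normal((C, C // r)) * 0.1
y = se_recalibrate(x, w1, w2)
print(y.shape)  # (8, 4, 4, 4)
```

Because each gate lies in (0, 1), the operation only attenuates channels relative to the input; in the paper's module this recalibration is combined with SE normalization and pooling branches to fuse semantic and context features adaptively.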


Similar Articles

1. DMCT-Net: dual modules convolution transformer network for head and neck tumor segmentation in PET/CT.
   Phys Med Biol. 2023 May 22;68(11). doi: 10.1088/1361-6560/acd29f.
2. SwinCross: Cross-modal Swin transformer for head-and-neck tumor segmentation in PET/CT images.
   Med Phys. 2024 Mar;51(3):2096-2107. doi: 10.1002/mp.16703. Epub 2023 Sep 30.
3. LSAM: L2-norm self-attention and latent space feature interaction for automatic 3D multi-modal head and neck tumor segmentation.
   Phys Med Biol. 2023 Nov 6;68(22). doi: 10.1088/1361-6560/ad04a8.
4. A modality-collaborative convolution and transformer hybrid network for unpaired multi-modal medical image segmentation with limited annotations.
   Med Phys. 2023 Sep;50(9):5460-5478. doi: 10.1002/mp.16338. Epub 2023 Mar 15.
5. A transformer-guided cross-modality adaptive feature fusion framework for esophageal gross tumor volume segmentation.
   Comput Methods Programs Biomed. 2024 Jun;251:108216. doi: 10.1016/j.cmpb.2024.108216. Epub 2024 May 11.
6. ISA-Net: Improved spatial attention network for PET-CT tumor segmentation.
   Comput Methods Programs Biomed. 2022 Nov;226:107129. doi: 10.1016/j.cmpb.2022.107129. Epub 2022 Sep 16.
7. [Breast cancer lesion segmentation based on co-learning feature fusion and Transformer].
   Sheng Wu Yi Xue Gong Cheng Xue Za Zhi. 2024 Apr 25;41(2):237-245. doi: 10.7507/1001-5515.202306063.
8. MFCNet: A multi-modal fusion and calibration networks for 3D pancreas tumor segmentation on PET-CT images.
   Comput Biol Med. 2023 Mar;155:106657. doi: 10.1016/j.compbiomed.2023.106657. Epub 2023 Feb 10.
9. ETUNet: Exploring efficient transformer enhanced UNet for 3D brain tumor segmentation.
   Comput Biol Med. 2024 Mar;171:108005. doi: 10.1016/j.compbiomed.2024.108005. Epub 2024 Jan 23.
10. TransConver: transformer and convolution parallel network for developing automatic brain tumor segmentation in MRI images.
    Quant Imaging Med Surg. 2022 Apr;12(4):2397-2415. doi: 10.21037/qims-21-919.

Cited By

1. Segmentation-Free Outcome Prediction from Head and Neck Cancer PET/CT Images: Deep Learning-Based Feature Extraction from Multi-Angle Maximum Intensity Projections (MA-MIPs).
   Cancers (Basel). 2024 Jul 14;16(14):2538. doi: 10.3390/cancers16142538.