


LSAM: L2-norm self-attention and latent space feature interaction for automatic 3D multi-modal head and neck tumor segmentation.

Affiliations

College of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing, People's Republic of China.

School of Science, Chongqing University of Posts and Telecommunications, Chongqing, People's Republic of China.

Publication

Phys Med Biol. 2023 Nov 6;68(22). doi: 10.1088/1361-6560/ad04a8.

DOI: 10.1088/1361-6560/ad04a8
PMID: 37852283
Abstract

Head and neck (H&N) cancers are prevalent globally, and early, accurate detection is crucial for timely and effective treatment. However, segmentation of H&N tumors is challenging because tumors and surrounding tissues have similar densities in CT images. Positron emission tomography (PET) images capture the metabolic activity of tissue and can distinguish lesion regions from normal tissue, but they are limited by low spatial resolution. To fully leverage the complementary information in PET and CT images, we propose a novel multi-modal segmentation method designed specifically for H&N tumors.

The proposed multi-modal tumor segmentation network (LSAM) consists of two key learning modules, L2-norm self-attention and latent space feature interaction, which exploit the high sensitivity of PET images and the anatomical information of CT images. These two modules form the core of a 3D segmentation network based on a U-shaped structure. The method integrates complementary features from the two modalities at multiple scales, thereby improving feature interaction between modalities.

We evaluated the proposed method on the public HECKTOR PET-CT dataset, and the experimental results demonstrate that it outperforms existing H&N tumor segmentation methods on key evaluation metrics, including DSC (0.8457), Jaccard (0.7756), RVD (0.0938), and HD95 (11.75).

The L2-norm-based self-attention mechanism is scalable and effective at reducing the impact of outliers on model performance, and the latent-space multi-scale feature interaction exploits the encoder-stage learning process to achieve complementary fusion across modalities.
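The abstract's claim that an L2-norm formulation bounds the influence of outliers can be illustrated with a minimal sketch. This is not the paper's implementation; the function names and projection setup are illustrative, and the assumption is that "L2-norm self-attention" means L2-normalizing queries and keys before the dot product, so every similarity score is a cosine value in [-1, 1] regardless of token magnitude:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def l2_self_attention(x, wq, wk, wv, eps=1e-6):
    """Self-attention with L2-normalized queries and keys (illustrative)."""
    # Project tokens to queries, keys, and values.
    q, k, v = x @ wq, x @ wk, x @ wv
    # L2-normalize q and k so each q·k score is a cosine similarity
    # in [-1, 1], which caps how much a large-magnitude (outlier)
    # token can dominate the attention weights.
    q = q / (np.linalg.norm(q, axis=-1, keepdims=True) + eps)
    k = k / (np.linalg.norm(k, axis=-1, keepdims=True) + eps)
    attn = softmax(q @ k.T)  # (n, n) attention weights, rows sum to 1
    return attn @ v          # (n, d) attended features

rng = np.random.default_rng(0)
n, d = 4, 8                          # 4 tokens, 8-dim features
x = rng.normal(size=(n, d))
wq, wk, wv = (rng.normal(size=(d, d)) for _ in range(3))
out = l2_self_attention(x, wq, wk, wv)
```

In standard dot-product attention, one token with a very large feature norm can saturate the softmax and pull all attention toward itself; the normalization above removes magnitude from the similarity entirely, which is consistent with the abstract's claim of reduced outlier sensitivity.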


Similar articles

1. LSAM: L2-norm self-attention and latent space feature interaction for automatic 3D multi-modal head and neck tumor segmentation.
   Phys Med Biol. 2023 Nov 6;68(22). doi: 10.1088/1361-6560/ad04a8.
2. Efficient model-informed co-segmentation of tumors on PET/CT driven by clustering and classification information.
   Comput Biol Med. 2024 Sep;180:108980. doi: 10.1016/j.compbiomed.2024.108980. Epub 2024 Aug 12.
3. Structural semantic-guided MR synthesis from PET images via a dual cross-attention mechanism.
   Med Phys. 2025 Jul;52(7):e17957. doi: 10.1002/mp.17957.
4. Multi-level channel-spatial attention and light-weight scale-fusion network (MCSLF-Net): multi-level channel-spatial attention and light-weight scale-fusion transformer for 3D brain tumor segmentation.
   Quant Imaging Med Surg. 2025 Jul 1;15(7):6301-6325. doi: 10.21037/qims-2025-354. Epub 2025 Jun 30.
5. Head and neck tumor segmentation from [F]F-FDG PET/CT images based on 3D diffusion model.
   Phys Med Biol. 2024 Jul 16;69(15). doi: 10.1088/1361-6560/ad5ef2.
6. SwinCross: Cross-modal Swin transformer for head-and-neck tumor segmentation in PET/CT images.
   Med Phys. 2024 Mar;51(3):2096-2107. doi: 10.1002/mp.16703. Epub 2023 Sep 30.
7. Short-Term Memory Impairment
8. CT-Less Whole-Body Bone Segmentation of PET Images Using a Multimodal Deep Learning Network.
   IEEE J Biomed Health Inform. 2025 Feb;29(2):1151-1164. doi: 10.1109/JBHI.2024.3501386. Epub 2025 Feb 10.
9. Large-scale convolutional neural network for clinical target and multi-organ segmentation in gynecologic brachytherapy via multi-stage learning.
   Med Phys. 2025 Aug;52(8):e18067. doi: 10.1002/mp.18067.
10. Diffusion semantic segmentation model: A generative model for medical image segmentation based on joint distribution.
   Med Phys. 2025 Jul;52(7):e17928. doi: 10.1002/mp.17928. Epub 2025 Jun 8.