Information fusion for fully automated segmentation of head and neck tumors from PET and CT images.

Affiliations

Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland.

Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, British Columbia, Canada.

Publication Information

Med Phys. 2024 Jan;51(1):319-333. doi: 10.1002/mp.16615. Epub 2023 Jul 20.

DOI: 10.1002/mp.16615
PMID: 37475591
Abstract

BACKGROUND

PET/CT images combine anatomic and metabolic data, providing complementary information that can improve performance on clinical tasks. However, PET image segmentation algorithms that exploit this available multi-modal information are still lacking.

PURPOSE

Our study aimed to assess the performance of PET and CT image fusion for gross tumor volume (GTV) segmentation of head and neck cancers (HNCs), using conventional, deep learning (DL), and output-level voting-based fusions.

METHODS

The current study is based on a total of 328 histologically confirmed HNCs from six different centers. The images were automatically cropped to a 200 × 200 head and neck region box, and the CT and PET images were normalized for further processing. Eighteen conventional image-level fusions were implemented. In addition, a modified U2-Net architecture was used as the baseline DL fusion model, with three variants fusing information at the input, layer, and decision levels. For output-level information fusion (voting-based fusion), simultaneous truth and performance level estimation (STAPLE) and majority voting were employed to merge the different segmentation outputs (from PET and from the image-level and network-level fusions). The networks were trained in a 2D manner with a batch size of 64. Twenty percent of the dataset, stratified by center (20% from each center), was held out for final result reporting. Standard segmentation metrics and conventional PET metrics, such as SUV, were calculated.
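The output-level majority voting described above can be sketched as follows. This is a minimal illustration with small hypothetical binary masks, not the authors' code; voxel-wise voting over any number of candidate segmentations works the same way.

```python
import numpy as np

def majority_vote(masks):
    """Output-level fusion: a voxel is labeled tumor if more than
    half of the candidate segmentations mark it as tumor.
    `masks` is a list of equally shaped binary arrays."""
    stacked = np.stack(masks, axis=0)      # (n_models, *mask_shape)
    votes = stacked.sum(axis=0)            # per-voxel tumor votes
    return (votes > len(masks) / 2).astype(np.uint8)

# three hypothetical 2x2 segmentation outputs
a = np.array([[1, 1], [0, 0]])
b = np.array([[1, 0], [0, 1]])
c = np.array([[1, 1], [1, 0]])
fused = majority_vote([a, b, c])
# fused == [[1, 1], [0, 0]]: only voxels with >= 2 of 3 votes survive
```

With a strict majority rule, an even split (e.g., 2 of 4 models) is rejected; STAPLE instead weights each candidate by an estimated per-rater performance level rather than counting votes equally.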

RESULTS

In single modalities, PET performed reasonably with a Dice score of 0.77 ± 0.09, while CT did not perform acceptably, reaching a Dice score of only 0.38 ± 0.22. Conventional fusion algorithms obtained Dice scores in the range [0.76-0.81], with guided-filter-based context enhancement (GFCE) at the low end, and anisotropic diffusion and Karhunen-Loeve transform fusion (ADF), multi-resolution singular value decomposition (MSVD), and multi-level image decomposition based on latent low-rank representation (MDLatLRR) at the high end. All DL fusion models achieved Dice scores of 0.80. Output-level voting-based models outperformed all other models, achieving a Dice score of 0.84 for Majority_ImgFus, Majority_All, and Majority_Fast. A mean error of almost zero was achieved for all fusions on the conventional SUV metrics.
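The Dice score reported throughout these results measures the volumetric overlap between a predicted mask and the ground-truth GTV (1.0 is perfect agreement, 0.0 is no overlap). A minimal sketch, not the study's implementation:

```python
import numpy as np

def dice_score(pred, gt, eps=1e-8):
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|)
    for two binary masks; `eps` guards against empty masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + eps)

# hypothetical flattened masks: 1 overlapping tumor voxel
pred = np.array([1, 1, 0, 0])
gt = np.array([1, 0, 0, 0])
print(round(dice_score(pred, gt), 2))  # 0.67
```

Note that Dice weights the intersection twice, so it rewards agreement more than IoU does; a Dice of 0.84 for the voting-based fusions versus 0.77 for PET alone is therefore a substantial gain at these scales.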

CONCLUSION

PET/CT information fusion adds significant value to segmentation tasks, considerably outperforming PET-only and CT-only methods. In addition, both conventional image-level and DL fusions achieve competitive results. Meanwhile, output-level voting-based fusion using majority voting of several algorithms results in statistically significant improvements in the segmentation of HNC.

Similar Articles

1. Information fusion for fully automated segmentation of head and neck tumors from PET and CT images.
Med Phys. 2024 Jan;51(1):319-333. doi: 10.1002/mp.16615. Epub 2023 Jul 20.
2. Multi-modal segmentation with missing image data for automatic delineation of gross tumor volumes in head and neck cancers.
Med Phys. 2024 Oct;51(10):7295-7307. doi: 10.1002/mp.17260. Epub 2024 Jun 19.
3. A comparison of methods for fully automatic segmentation of tumors and involved nodes in PET/CT of head and neck cancers.
Phys Med Biol. 2021 Mar 4;66(6):065012. doi: 10.1088/1361-6560/abe553.
4. Fully Automated Gross Tumor Volume Delineation From PET in Head and Neck Cancer Using Deep Learning Algorithms.
Clin Nucl Med. 2021 Nov 1;46(11):872-883. doi: 10.1097/RLU.0000000000003789.
5. Gross tumor volume segmentation for head and neck cancer radiotherapy using deep dense multi-modality network.
Phys Med Biol. 2019 Oct 16;64(20):205015. doi: 10.1088/1361-6560/ab440d.
6. SwinCross: Cross-modal Swin transformer for head-and-neck tumor segmentation in PET/CT images.
Med Phys. 2024 Mar;51(3):2096-2107. doi: 10.1002/mp.16703. Epub 2023 Sep 30.
7. Decentralized Distributed Multi-institutional PET Image Segmentation Using a Federated Deep Learning Framework.
Clin Nucl Med. 2022 Jul 1;47(7):606-617. doi: 10.1097/RLU.0000000000004194. Epub 2022 Apr 20.
8. Deep learning generation of preclinical positron emission tomography (PET) images from low-count PET with task-based performance assessment.
Med Phys. 2024 Jun;51(6):4324-4339. doi: 10.1002/mp.17105. Epub 2024 May 6.
9. Comparing different CT, PET and MRI multi-modality image combinations for deep learning-based head and neck tumor segmentation.
Acta Oncol. 2021 Nov;60(11):1399-1406. doi: 10.1080/0284186X.2021.1949034. Epub 2021 Jul 15.
10. Comparison of deep learning networks for fully automated head and neck tumor delineation on multi-centric PET/CT images.
Radiat Oncol. 2024 Jan 8;19(1):3. doi: 10.1186/s13014-023-02388-0.

Cited By

1. Recent advances in PET/MR imaging for head and neck tumors: a systematic review of the last three years.
Radiol Med. 2025 Aug 28. doi: 10.1007/s11547-025-02075-y.
2. Deep Learning-Based Detection of Separated Root Canal Instruments in Panoramic Radiographs Using a U-Net Architecture.
Diagnostics (Basel). 2025 Jul 9;15(14):1744. doi: 10.3390/diagnostics15141744.
3. Development and validation of pan-cancer lesion segmentation AI-model for whole-body 18F-FDG PET/CT in diverse clinical cohorts.
Comput Biol Med. 2025 May;190:110052. doi: 10.1016/j.compbiomed.2025.110052. Epub 2025 Mar 23.
4. Machine learning-based analysis of Ga-PSMA-11 PET/CT images for estimation of prostate tumor grade.
Phys Eng Sci Med. 2024 Jun;47(2):741-753. doi: 10.1007/s13246-024-01402-3. Epub 2024 Mar 25.