
Segmentation-Free Outcome Prediction from Head and Neck Cancer PET/CT Images: Deep Learning-Based Feature Extraction from Multi-Angle Maximum Intensity Projections (MA-MIPs).

Authors

Toosi Amirhosein, Shiri Isaac, Zaidi Habib, Rahmim Arman

Affiliations

Department of Radiology, University of British Columbia, Vancouver, BC V5Z 1M9, Canada.

Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, BC V5Z 1L3, Canada.

Publication

Cancers (Basel). 2024 Jul 14;16(14):2538. doi: 10.3390/cancers16142538.

Abstract

We introduce a simple and effective segmentation-free approach for survival analysis of head and neck cancer (HNC) patients from PET/CT images. By harnessing deep learning-based feature extraction and multi-angle maximum intensity projections (MA-MIPs) applied to fluorodeoxyglucose positron emission tomography (FDG-PET) images, the proposed method eliminates the need for manual segmentation of regions of interest (ROIs) such as primary tumors and involved lymph nodes. Instead, a state-of-the-art object detection model is trained on the CT images to automatically crop the entire head and neck anatomical region of the PET volumes, rather than only the lesions or involved lymph nodes. A pre-trained deep convolutional neural network backbone is then used to extract deep features from MA-MIPs obtained from 72 multi-angle axial rotations of the cropped PET volumes. The deep features extracted from these multiple projection views are aggregated and fused, and employed to perform recurrence-free survival analysis on a cohort of 489 HNC patients. The proposed approach outperforms the best-performing method reported on the target dataset for recurrence-free survival analysis. By circumventing manual delineation of malignancies on the FDG PET/CT images, our approach removes the dependency on subjective interpretation and greatly enhances the reproducibility of the survival analysis. The code for this work is publicly released.
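
As an illustration of the MA-MIP step described above, the sketch below rotates a cropped PET volume about its axial axis in 5-degree increments (72 angles in total) and takes a maximum intensity projection of each rotated volume. This is a minimal sketch using NumPy and SciPy under stated assumptions, not the authors' released implementation: the function name multi_angle_mips, the (z, y, x) array layout, the linear interpolation order, and the choice of projection axis are illustrative assumptions.

import numpy as np
from scipy.ndimage import rotate

def multi_angle_mips(pet_volume: np.ndarray, n_angles: int = 72) -> np.ndarray:
    """Generate multi-angle maximum intensity projections (MA-MIPs).

    pet_volume: 3D array in (z, y, x) order, already cropped to the
    head-and-neck region. Returns an array of shape (n_angles, z, x),
    one 2D MIP per rotation angle.
    """
    mips = []
    for k in range(n_angles):
        angle = k * (360.0 / n_angles)  # 72 angles -> 5-degree increments
        # Rotate in the axial (y, x) plane; reshape=False keeps the array size fixed.
        rotated = rotate(pet_volume, angle, axes=(1, 2), reshape=False, order=1)
        # Project along the y axis to obtain one 2D MIP of the rotated volume.
        mips.append(rotated.max(axis=1))
    return np.stack(mips)

# Example with a dummy 128 x 96 x 96 PET crop (illustrative only).
pet = np.random.rand(128, 96, 96).astype(np.float32)
ma_mips = multi_angle_mips(pet)
print(ma_mips.shape)  # (72, 128, 96)

Each of the resulting 2D projections can then be passed through a pre-trained CNN backbone, and the 72 per-view feature vectors aggregated and fused (for example, by pooling) before being fed to the recurrence-free survival model, as the abstract describes.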

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/db6e/11274485/5ca5488f9270/cancers-16-02538-g001.jpg
