

Integrating a deep neural network and Transformer architecture for the automatic segmentation and survival prediction in cervical cancer.

Author Information

Zhu Shitao, Lin Ling, Liu Qin, Liu Jing, Song Yanwen, Xu Qin

Affiliations

College of Computer and Data Science, Fuzhou University, Fuzhou, China.

Department of Gynecology, Clinical Oncology School of Fujian Medical University, Fujian Cancer Hospital, Fuzhou, China.

Publication Information

Quant Imaging Med Surg. 2024 Aug 1;14(8):5408-5419. doi: 10.21037/qims-24-560. Epub 2024 Jul 16.

Abstract

BACKGROUND

Automated tumor segmentation and survival prediction are critical to clinical diagnosis and treatment. This study aimed to develop deep-learning models for automatic tumor segmentation and survival prediction in magnetic resonance imaging (MRI) of cervical cancer (CC) by combining deep neural networks and Transformer architecture.

METHODS

This study included 406 patients with CC, each with comprehensive clinical information and MRI scans. We randomly divided patients into training, validation, and independent test cohorts in a 6:2:2 ratio. During model training, we employed two architecture types: one a hybrid model combining a convolutional neural network (CNN) and a Transformer (CoTr), and the other consisting of pure CNNs. For survival prediction, the hybrid model combined tumor image features extracted by the segmentation models with clinical information. The performance of the segmentation models was evaluated using the Dice similarity coefficient (DSC) and the 95% Hausdorff distance (HD95), and the performance of the survival models was assessed using the concordance index.
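For readers unfamiliar with the two segmentation metrics, the sketch below illustrates how they are typically computed on binary masks. This is a minimal NumPy/SciPy illustration, not the authors' evaluation code; the function names are ours, and the HD95 here approximates the boundary-based definition by taking the 95th percentile of nearest-neighbor distances over all foreground voxels.

```python
# Minimal sketch of DSC and HD95 for binary segmentation masks.
# Assumes both masks are non-empty NumPy arrays of the same shape.
import numpy as np
from scipy.spatial.distance import cdist

def dice_similarity(pred: np.ndarray, gt: np.ndarray) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|); 1.0 is perfect overlap."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

def hd95(pred: np.ndarray, gt: np.ndarray) -> float:
    """Symmetric 95% Hausdorff distance between two binary masks."""
    a = np.argwhere(pred.astype(bool))  # foreground voxel coordinates
    b = np.argwhere(gt.astype(bool))
    d = cdist(a, b)                     # pairwise Euclidean distances
    # Distance from each voxel in one mask to the nearest in the other,
    # in both directions, then the 95th percentile of the pooled set.
    forward, backward = d.min(axis=1), d.min(axis=0)
    return float(np.percentile(np.concatenate([forward, backward]), 95))
```

In practice these metrics are averaged over the test cohort, as in the per-model DSC values reported in the results below.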

RESULTS

The CoTr model performed well in both contrast-enhanced T1-weighted (ceT1W) and T2-weighted (T2W) imaging segmentation tasks, with average DSCs of 0.827 and 0.820, respectively, which outperformed the pure CNN models such as U-Net (DSC: 0.807 and 0.808), attention U-Net (DSC: 0.814 and 0.811), and V-Net (DSC: 0.805 and 0.807). For survival prediction, the proposed deep-learning model significantly outperformed traditional methods, yielding a concordance index of 0.732. Moreover, it effectively stratified patients into low-risk and high-risk groups for disease progression (P<0.001).
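The concordance index reported here is conventionally Harrell's C-index for right-censored survival data. A minimal sketch of that computation is shown below, assuming arrays of progression times, event indicators, and model risk scores; the function name and inputs are illustrative, not taken from the paper.

```python
# Minimal sketch of Harrell's concordance index (C-index) for
# right-censored data: the fraction of comparable patient pairs whose
# predicted risks are correctly ordered (higher risk -> earlier event).
import numpy as np

def concordance_index(time: np.ndarray, event: np.ndarray,
                      risk: np.ndarray) -> float:
    concordant, comparable = 0.0, 0
    n = len(time)
    for i in range(n):
        for j in range(n):
            # Pair (i, j) is comparable if patient i had an observed
            # event before patient j's event or censoring time.
            if event[i] == 1 and time[i] < time[j]:
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1.0
                elif risk[i] == risk[j]:
                    concordant += 0.5  # tied risks count as half
    return concordant / comparable if comparable else float("nan")
```

Under this definition, the reported C-index of 0.732 means roughly 73% of comparable patient pairs were ordered correctly by the model's predicted risk, where 0.5 corresponds to chance level.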

CONCLUSIONS

Combining Transformer architecture with a CNN can improve MRI tumor segmentation, and this deep-learning model excelled in the survival prediction of patients with CC as compared to traditional methods.


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d6f9/11320496/0d5522a450d1/qims-14-08-5408-f1.jpg
