

Predicting rectal cancer tumor budding grading based on MRI and CT with multimodal deep transfer learning: A dual-center study.

Authors

Liu Ziyan, Jia Jianye, Bai Fan, Ding Yuxin, Han Lei, Bai Genji

Affiliations

Medical Imaging Center, The Affiliated Huaian No.1 People's Hospital of Nanjing Medical University, Huaian, Jiangsu, China.

Department of Medical Imaging, Huaian Hospital Affiliated to Xuzhou Medical University, Huaian, Jiangsu, China.

Publication

Heliyon. 2024 Mar 26;10(7):e28769. doi: 10.1016/j.heliyon.2024.e28769. eCollection 2024 Apr 15.

Abstract

OBJECTIVE

To investigate the effectiveness of a multimodal deep learning model in predicting tumor budding (TB) grading in rectal cancer (RC) patients.

MATERIALS AND METHODS

A retrospective analysis was conducted on 355 patients with rectal adenocarcinoma from two hospitals. Among them, 289 patients from our institution were randomly divided into an internal training cohort (n = 202) and an internal validation cohort (n = 87) in a 7:3 ratio, while an additional 66 patients from another hospital constituted an external validation cohort. Various deep learning models were constructed and their performance compared using T1CE and contrast-enhanced CT images, and the optimal models were selected to create a multimodal fusion model. Based on univariable and multivariable logistic regression, clinical N staging and fecal occult blood were identified as independent risk factors and used to construct a clinical model. Decision-level fusion was employed to integrate these two models into an ensemble model. The predictive performance of each model was evaluated using the area under the curve (AUC), DeLong's test, calibration curves, and decision curve analysis (DCA). Gradient-weighted Class Activation Mapping (Grad-CAM) was used for model visualization and interpretation.
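The pipeline above combines an imaging model's probability output with a logistic-regression clinical model at the decision level. The paper does not specify its fusion rule, so the sketch below uses a simple average of the two predicted probabilities as an illustrative assumption; all data are synthetic stand-ins, and the variable names (N-stage and occult-blood features) are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-ins for the two information sources in the abstract:
# an imaging model's probabilities and two clinical risk factors.
n = 200
y = rng.integers(0, 2, size=n)  # TB grade label (0 = low, 1 = high)
deep_prob = np.clip(y * 0.6 + rng.normal(0.3, 0.2, n), 0, 1)  # imaging-model output
clinical_X = np.column_stack([
    y + rng.normal(0, 0.8, n),   # hypothetical N-stage feature
    y + rng.normal(0, 1.0, n),   # hypothetical fecal-occult-blood feature
])

# Clinical model: logistic regression on the two independent risk factors.
clin_model = LogisticRegression().fit(clinical_X, y)
clin_prob = clin_model.predict_proba(clinical_X)[:, 1]

# Decision-level fusion: combine the two probability outputs.
# An equal-weight average is assumed; the paper's weights are not stated.
ensemble_prob = 0.5 * deep_prob + 0.5 * clin_prob

print(f"deep AUC:     {roc_auc_score(y, deep_prob):.3f}")
print(f"clinical AUC: {roc_auc_score(y, clin_prob):.3f}")
print(f"ensemble AUC: {roc_auc_score(y, ensemble_prob):.3f}")
```

In practice the fusion weights would be tuned on the training cohort, and AUCs would be reported on the held-out internal and external validation cohorts rather than on the fitting data as in this toy sketch.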

RESULTS

The multimodal fusion model demonstrated superior performance compared to the single-modal models, with AUC values of 0.869 (95% CI: 0.761-0.976) in the internal validation cohort and 0.848 (95% CI: 0.721-0.975) in the external validation cohort. N stage and fecal occult blood were identified as independent clinical risk factors through univariable and multivariable logistic regression analysis. The final ensemble model exhibited the best performance, with AUC values of 0.898 (95% CI: 0.820-0.975) in the internal validation cohort and 0.868 (95% CI: 0.768-0.968) in the external validation cohort.

CONCLUSION

Multimodal deep learning models can effectively and non-invasively provide individualized predictions for TB grading in RC patients, offering valuable guidance for treatment selection and prognosis assessment.


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8c0e/11000007/9049151ac4b1/gr1.jpg
