Attention mechanism-based multi-parametric MRI ensemble model for predicting tumor budding grade in rectal cancer patients.

Author information

Jia Jianye, Kang Yue, Wang Jiahao, Bai Fan, Han Lei, Niu Yantao

Affiliations

Department of Radiology, Beijing Friendship Hospital, Capital Medical University, Beijing, China.

Department of Radiology, Beijing Tongren Hospital, Capital Medical University, Beijing, China.

Publication information

Abdom Radiol (NY). 2025 Apr 1. doi: 10.1007/s00261-025-04886-z.

Abstract

PURPOSE

To develop and validate a deep learning-based feature ensemble model using multiparametric magnetic resonance imaging (MRI) for predicting tumor budding (TB) grading in patients with rectal cancer (RC).

METHODS

A retrospective cohort of 458 patients with pathologically confirmed RC from three institutions was included. Among them, 355 patients from Center 1 were split at a 7:3 ratio into a training cohort (n = 248) and an internal validation cohort (n = 107). An additional 103 patients from the two other centers served as the external validation cohort. Deep learning models based on the CrossFormer architecture were constructed for T2-weighted imaging (T2WI) and diffusion-weighted imaging (DWI), and deep learning features were extracted from each sequence. A feature ensemble module based on the Transformer attention mechanism was then used to capture spatial interactions between the imaging sequences, yielding a multiparametric ensemble model. The predictive performance of each model was evaluated using the area under the receiver operating characteristic curve (AUC), calibration curves, and decision curve analysis (DCA).
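The attention-based feature ensemble step can be pictured, in a much-simplified form, as scaled dot-product attention over one feature token per MRI sequence. The sketch below is a minimal NumPy illustration only, not the authors' CrossFormer/Transformer implementation: the feature dimension, random projections, and mean pooling are all assumptions, since the abstract does not specify them.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64  # assumed feature dimension per sequence (not stated in the abstract)

# Placeholder deep-learning features for one patient, one vector per MRI sequence
f_t2wi = rng.normal(size=d)
f_dwi = rng.normal(size=d)
tokens = np.stack([f_t2wi, f_dwi])        # shape (2, d): one token per sequence

# Randomly initialized projections; in the real model these would be learned
W_q, W_k, W_v = (rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(3))

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

Q, K, V = tokens @ W_q, tokens @ W_k, tokens @ W_v
attn = softmax(Q @ K.T / np.sqrt(d))      # (2, 2): each sequence attends to both
fused = attn @ V                          # attention-weighted mix of T2WI and DWI
ensemble_feature = fused.mean(axis=0)     # pooled multiparametric representation
print(ensemble_feature.shape)             # -> (64,)
```

The off-diagonal entries of `attn` are what let information from one sequence reweight the other, which is the "spatial interaction between imaging sequences" idea in the ensemble module.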

RESULTS

The T2WI-based deep learning model achieved AUCs of 0.789 (95% CI: 0.680-0.900) and 0.720 (95% CI: 0.591-0.849) in the internal and external validation cohorts, respectively. The DWI-based model achieved AUCs of 0.806 (95% CI: 0.705-0.908) and 0.772 (95% CI: 0.657-0.887), respectively. The multiparametric ensemble model performed best, with AUCs of 0.868 (95% CI: 0.775-0.960) in the internal validation cohort and 0.839 (95% CI: 0.743-0.935) in the external validation cohort. The DeLong test showed that the differences in AUC among the models were not statistically significant in either the internal or external validation cohort (P > 0.05). Decision curve analysis demonstrated that, within the 10-80% threshold probability range, the fusion model provided a higher clinical net benefit than the other models.
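Confidence intervals of the kind reported above are often obtained by bootstrapping the validation set, although the abstract does not state which method the authors used. The following is a generic sketch on synthetic labels and scores (none of the data, sample sizes, or seeds correspond to the study):

```python
import numpy as np

def auc(y, s):
    """Rank-based AUC: probability that a positive case outranks a negative one."""
    order = np.argsort(s)
    ranks = np.empty(len(s))
    ranks[order] = np.arange(1, len(s) + 1)
    n_pos = int(y.sum())
    n_neg = len(y) - n_pos
    return (ranks[y == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

rng = np.random.default_rng(42)
y = rng.integers(0, 2, size=200)          # synthetic binary TB grades
scores = y + rng.normal(size=200)         # synthetic model outputs

# Percentile bootstrap: resample patients with replacement, recompute AUC
boot = []
for _ in range(1000):
    idx = rng.integers(0, len(y), size=len(y))
    boot.append(auc(y[idx], scores[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"AUC = {auc(y, scores):.3f} (95% CI: {lo:.3f}-{hi:.3f})")
```

Resampling whole patients (rather than individual predictions) keeps each bootstrap replicate a plausible validation cohort, which is why the interval reflects cohort-level sampling variability.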

CONCLUSION

Compared to single-sequence deep learning models, the attention mechanism-based multiparametric MRI fusion model enables more effective individualized prediction of TB grading in RC patients. It offers valuable guidance for treatment selection and prognostic evaluation while providing imaging-based support for personalized postoperative follow-up adjustments.

