
Predicting rectal cancer prognosis from histopathological images and clinical information using multi-modal deep learning.

Author Information

Xu Yixin, Guo Jiedong, Yang Na, Zhu Can, Zheng Tianlei, Zhao Weiguo, Liu Jia, Song Jun

Affiliations

Department of General Surgery, The Affiliated Hospital of Xuzhou Medical University, Xuzhou, Jiangsu, China.

Artificial Intelligence Unit, Department of Medical Equipment Management, Affiliated Hospital of Xuzhou Medical University, Xuzhou, China.

Publication Information

Front Oncol. 2024 Apr 15;14:1353446. doi: 10.3389/fonc.2024.1353446. eCollection 2024.

Abstract

OBJECTIVE

This study aimed to develop a multi-modal deep learning framework for predicting the survival of rectal cancer patients using both digital pathology image data and non-imaging clinical data.

MATERIALS AND METHODS

The study included patients with pathologically confirmed rectal cancer diagnosed between January 2015 and December 2016. Patients were randomly allocated to training and testing sets at a ratio of 4:1. Tissue microarrays (TMAs) and clinical indicators were obtained. The TMAs were scanned to convert them into digital pathology images, and the patients' clinical data were pre-processed. Distinct deep learning algorithms were then selected to perform survival prediction using the patients' pathological images and clinical data, respectively.
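The 4:1 random allocation described above can be sketched as follows; the function name, seed, and use of a fixed test ratio are illustrative assumptions, not details from the paper.

```python
import random

def split_patients(patient_ids, test_ratio=0.2, seed=42):
    """Randomly allocate patients to training and testing sets at 4:1."""
    ids = list(patient_ids)
    rng = random.Random(seed)       # fixed seed for a reproducible split
    rng.shuffle(ids)
    n_test = round(len(ids) * test_ratio)
    return ids[n_test:], ids[:n_test]  # (training set, testing set)

# With the study's 292 patients, a 4:1 split yields 234 training
# and 58 testing cases, matching the counts reported in the Results.
train_ids, test_ids = split_patients(range(292))
```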

RESULTS

A total of 292 patients with rectal cancer were randomly allocated into two groups: a training set of 234 cases and a testing set of 58 cases. We first predicted survival status directly from pre-processed hematoxylin and eosin (H&E)-stained pathological images of rectal cancer, using the ResNest model to extract features from patients' histopathological images; this yielded a survival status prediction with an area under the curve (AUC) of 0.797. We then employed a multi-head attention fusion (MHAF) model to combine image features and clinical features to predict the survival of rectal cancer patients. Our experiments show that the multi-modal structure outperforms direct prediction from histopathological images alone, achieving an AUC of 0.837 in predicting overall survival (OS).
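The abstract does not specify the internal architecture of the MHAF model. The following is a minimal NumPy sketch of one plausible fusion scheme: the image and clinical feature vectors are treated as a two-token sequence, passed through multi-head self-attention, pooled, and mapped to a survival probability. Feature dimension, head count, pooling, and the randomly initialized weights are all illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(tokens, n_heads, rng):
    """Scaled dot-product self-attention over a short token sequence."""
    n, d = tokens.shape
    dh = d // n_heads
    Wq, Wk, Wv, Wo = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(4))
    Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv
    heads = []
    for h in range(n_heads):
        s = slice(h * dh, (h + 1) * dh)
        scores = softmax(Q[:, s] @ K[:, s].T / np.sqrt(dh))
        heads.append(scores @ V[:, s])
    return np.concatenate(heads, axis=1) @ Wo

def fuse_and_predict(img_feat, clin_feat, n_heads=4, seed=0):
    """Fuse an image feature vector and a clinical feature vector,
    then output a survival probability (untrained, random weights)."""
    rng = np.random.default_rng(seed)
    tokens = np.stack([img_feat, clin_feat])           # 2-token sequence
    fused = multi_head_attention(tokens, n_heads, rng).mean(axis=0)
    w = rng.standard_normal(fused.shape[0]) / np.sqrt(fused.shape[0])
    return 1.0 / (1.0 + np.exp(-(fused @ w)))          # sigmoid output

rng = np.random.default_rng(1)
prob = fuse_and_predict(rng.standard_normal(64), rng.standard_normal(64))
```

In a trained system the weights would of course be learned jointly, and the two modality vectors would come from a pathology-image backbone (ResNest in this study) and an encoding of the clinical indicators.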

CONCLUSIONS

Our study highlights the potential of multi-modal deep learning models in predicting survival status from histopathological images and clinical information, thus offering valuable insights for clinical applications.


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8189/11060749/5495e5aee25c/fonc-14-1353446-g001.jpg
