

A novel deep learning-based algorithm combining histopathological features with tissue areas to predict colorectal cancer survival from whole-slide images.

Affiliations

Institute of Medical Informatics, National Cheng Kung University, Tainan, 70101, Taiwan.

Department of Computer Science and Information Engineering, National Chi Nan University, Nantou, 545301, Taiwan.

Publication Information

J Transl Med. 2023 Oct 17;21(1):731. doi: 10.1186/s12967-023-04530-8.

Abstract

BACKGROUND

Many methodologies for selecting histopathological images, such as sampling image patches or segmenting histology from regions of interest (ROIs) or whole-slide images (WSIs), have been used to develop survival models. Because gigapixel WSIs exhibit diverse histological appearances, obtaining clinically prognostic and explainable features remains challenging. Therefore, we propose a novel deep learning-based algorithm combining tissue areas with histopathological features to predict cancer survival.
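To make the patch-sampling step concrete, the sketch below shows one common way to tile a gigapixel WSI into fixed-size patches with openslide-python. The tile size, pyramid level, and crude background filter are illustrative assumptions, not the authors' pipeline.

```python
# Illustrative sketch (not the authors' pipeline): tiling a gigapixel
# whole-slide image into fixed-size patches with openslide-python.
# Tile size, pyramid level, and the background threshold are assumed
# values chosen for demonstration only.
import numpy as np
import openslide

def sample_patches(wsi_path, tile=512, level=0, tissue_thresh=0.5):
    """Yield (coordinates, RGB array) for patches that contain enough tissue."""
    slide = openslide.OpenSlide(wsi_path)
    width, height = slide.level_dimensions[level]
    # read_region expects the location in level-0 coordinates; level=0 keeps this simple.
    for y in range(0, height - tile + 1, tile):
        for x in range(0, width - tile + 1, tile):
            patch = slide.read_region((x, y), level, (tile, tile)).convert("RGB")
            arr = np.asarray(patch)
            # Crude background filter: mostly-bright pixels are glass, not tissue.
            tissue_frac = np.mean(arr.mean(axis=2) < 220)
            if tissue_frac >= tissue_thresh:
                yield (x, y), arr

# Usage (path is a placeholder): feed each patch to a patch-level feature extractor.
# for (x, y), arr in sample_patches("TCGA-COAD_slide.svs"):
#     ...
```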

METHODS

The Cancer Genome Atlas Colon Adenocarcinoma (TCGA-COAD) dataset was used in this investigation. A deep convolutional survival model (DeepConvSurv) extracted histopathological information from image patches of nine different tissue types, including tumor, lymphocytes, stroma, and mucus. The tissue map of the WSIs was segmented using image processing techniques that localized and quantified the tissue regions. Six survival models were evaluated, with the concordance index (C-index) as the evaluation metric.
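As a rough illustration of the evaluation setup, the following sketch fits a Cox proportional hazards model on a toy patient table and scores it with the concordance index using lifelines. The feature names and toy data are assumptions for demonstration, not the study's actual features or code.

```python
# Minimal sketch (assumed setup, not the paper's implementation): fitting a
# Cox proportional hazards model and scoring it with the concordance index.
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

# One row per patient: toy deep features per tissue type plus follow-up
# time (months) and an event indicator (1 = death observed, 0 = censored).
df = pd.DataFrame({
    "tumor_feat":      [0.8, 0.2, 0.5, 0.9, 0.1, 0.4, 0.7, 0.3],
    "lymphocyte_feat": [0.1, 0.7, 0.3, 0.2, 0.9, 0.6, 0.2, 0.8],
    "time":            [34, 120, 56, 12, 98, 75, 28, 110],
    "event":           [1, 0, 1, 1, 0, 0, 1, 0],
})

cph = CoxPHFitter(penalizer=0.1)   # small penalty stabilizes the toy fit
cph.fit(df, duration_col="time", event_col="event")

# The C-index rewards ranking: higher predicted risk should pair with
# shorter observed survival; 0.5 is chance, 1.0 is a perfect ranking.
risk = cph.predict_partial_hazard(df)
cindex = concordance_index(df["time"], -risk, df["event"])
print(f"C-index: {cindex:.3f}")
```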

RESULTS

We extracted 128 histopathological features from four histological types and five tissue area features from WSIs to predict colorectal cancer survival. Our method outperformed the Whole Slide Histopathological Images Survival Analysis framework (WSISA), which adaptively samples patches from WSIs using K-means, across six distinct survival models. The best performance using histopathological features alone was a C-index of 0.679 with LASSO-Cox. Compared with histopathological features alone, adding tissue area features increased the C-index by 2.5%. Using both histopathological and tissue area features, our approach achieved a C-index of 0.704 with RIDGE-Cox.
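The comparison between LASSO-Cox and RIDGE-Cox can be sketched as below on a synthetic table with 128 histopathological and 5 tissue area columns, mirroring the feature layout described here; the data, penalizer strength, and resulting C-indices are purely illustrative and not the study's results.

```python
# Illustrative comparison (synthetic data, not the study's results): LASSO-Cox
# (l1_ratio=1.0) versus RIDGE-Cox (l1_ratio=0.0) on a table combining
# 128 histopathological features with 5 tissue area features.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 200
histo = pd.DataFrame(rng.normal(size=(n, 128)),
                     columns=[f"histo_{i}" for i in range(128)])
area = pd.DataFrame(rng.normal(size=(n, 5)),
                    columns=[f"area_{i}" for i in range(5)])
df = pd.concat([histo, area], axis=1)
df["time"] = rng.exponential(scale=60.0, size=n)   # synthetic follow-up, months
df["event"] = rng.integers(0, 2, size=n)           # synthetic event indicator

# On random features the C-index hovers near 0.5; with real prognostic
# features the two penalties can rank patients differently.
for name, l1_ratio in [("LASSO-Cox", 1.0), ("RIDGE-Cox", 0.0)]:
    cph = CoxPHFitter(penalizer=1.0, l1_ratio=l1_ratio)
    cph.fit(df, duration_col="time", event_col="event")
    print(name, round(cph.concordance_index_, 3))
```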

CONCLUSIONS

A deep learning-based algorithm combining histopathological features with tissue area proved clinically relevant and effective for predicting cancer survival.


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9ba4/10580604/b1371005f9a1/12967_2023_4530_Fig1_HTML.jpg
