
Automatic diagnosis and grading of Prostate Cancer with weakly supervised learning on whole slide images.

Affiliations

AI Lab, Tencent, Shenzhen, China.

College of Computer Science, Sichuan University, Chengdu, China.

Publication Info

Comput Biol Med. 2023 Jan;152:106340. doi: 10.1016/j.compbiomed.2022.106340. Epub 2022 Nov 21.

Abstract

BACKGROUND

The workflow of prostate cancer diagnosis and grading is cumbersome, and the results suffer from substantial inter-observer variability. Recent trials have shown the potential of machine learning for developing automated systems to address this challenge. However, most automated deep learning systems for prostate cancer Gleason grading have focused on supervised learning, which requires demanding fine-grained pixel-level annotations.

METHODS

A weakly supervised deep learning model trained with slide-level labels is presented in this study for the diagnosis and grading of prostate cancer on whole slide images (WSIs). WSIs are first cropped into small patches, which are then processed with a deep learning model to extract patch-level features. A graph convolutional network (GCN) aggregates the features for classification. Throughout the training process, noisy labels are progressively filtered out to reduce the effect of inter-observer variation in clinical reports. Finally, multi-center independent test cohorts comprising 6,174 slides are collected to evaluate the prostate cancer diagnosis and grading performance of the model.
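The patch-to-slide aggregation step can be sketched with a plain graph-convolution layer over a patch adjacency graph. This is an illustrative toy in NumPy, not the paper's implementation: all function names, dimensions, and the random patch graph are assumptions, and the real system would use learned weights and a trained feature extractor.

```python
import numpy as np

def gcn_layer(adj, feats, weight):
    """One graph-convolution layer: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W)."""
    a_hat = adj + np.eye(adj.shape[0])            # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt      # symmetric normalization
    return np.maximum(a_norm @ feats @ weight, 0.0)

def slide_representation(patch_feats, adj, w1, w2):
    """Propagate patch features over the graph, then mean-pool to one slide vector."""
    h = gcn_layer(adj, patch_feats, w1)           # patch-level message passing
    h = gcn_layer(adj, h, w2)
    return h.mean(axis=0)                         # slide-level representation

rng = np.random.default_rng(0)
n_patches, feat_dim, hidden, n_classes = 8, 16, 32, 6   # 6 = grade groups (assumed)
feats = rng.normal(size=(n_patches, feat_dim))          # stand-in for CNN features
adj = (rng.random((n_patches, n_patches)) > 0.7).astype(float)
adj = np.maximum(adj, adj.T)                            # undirected patch graph
logits = slide_representation(feats, adj,
                              rng.normal(size=(feat_dim, hidden)),
                              rng.normal(size=(hidden, n_classes)))
print(logits.shape)  # (6,)
```

In practice the adjacency would encode spatial proximity of patches on the slide, so that message passing lets neighboring tissue regions inform each other before the slide-level pooling.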

RESULTS

The cancer diagnosis (2-level classification) results on two external test sets (n = 4,675 and n = 844) show areas under the receiver operating characteristic curve (AUC) of 0.985 and 0.986. The Gleason grading (6-level classification) results reach a quadratic weighted kappa of 0.931 on the internal test set (n = 531). The model generalizes well to the external test dataset (n = 844), achieving a quadratic weighted kappa of 0.801 against an independently established reference standard. The model also offers pathologically meaningful interpretability by visualizing the most-attended lesions, which are highly consistent with expert annotations.
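Quadratic weighted kappa, the grading metric reported above, penalizes disagreements by the squared distance between predicted and true grades, so an off-by-one grading error costs far less than a large one. A minimal self-contained sketch (the function name and the 6-class setting are illustrative; this matches the standard definition, equivalent to `sklearn.metrics.cohen_kappa_score` with `weights="quadratic"`):

```python
import numpy as np

def quadratic_weighted_kappa(y_true, y_pred, n_classes=6):
    """Cohen's kappa with quadratic disagreement weights."""
    o = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        o[t, p] += 1                                          # observed confusion matrix
    e = np.outer(o.sum(axis=1), o.sum(axis=0)) / o.sum()      # chance-expected matrix
    idx = np.arange(n_classes)
    w = (idx[:, None] - idx[None, :]) ** 2 / (n_classes - 1) ** 2  # quadratic penalty
    return 1.0 - (w * o).sum() / (w * e).sum()

truth = [0, 1, 2, 3, 4, 5]
print(quadratic_weighted_kappa(truth, truth))                 # 1.0 (perfect agreement)
# An off-by-one error is penalized less than a five-grade error:
print(quadratic_weighted_kappa(truth, [0, 1, 2, 3, 4, 4]) >
      quadratic_weighted_kappa(truth, [0, 1, 2, 3, 4, 0]))    # True
```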

CONCLUSION

The proposed model incorporates a graph network into weakly supervised learning using only slide-level reports. A robust learning strategy is also employed to correct label noise. The model is highly accurate (>0.985 AUC for diagnosis) and interpretable, with intuitive heatmap visualization. It can be integrated into a digital pathology pipeline to deliver prostate cancer metrics for a pathology report.

