

Translating prognostic quantification of c-MYC and BCL2 from tissue microarrays to whole slide images in diffuse large B-cell lymphoma using deep learning.

Affiliations

Center for Artificial Intelligence Research, Wake Forest University School of Medicine, Winston-Salem, NC, USA.

Department of Laboratory Medicine and Pathology, Mayo Clinic, Rochester, MN, USA.

Publication Information

Diagn Pathol. 2024 Jan 19;19(1):17. doi: 10.1186/s13000-023-01425-6.

Abstract

BACKGROUND

c-MYC and BCL2 positivity are important prognostic factors in diffuse large B-cell lymphoma. However, manual quantification is subject to significant intra- and inter-observer variability. We developed an automated method for quantification in whole-slide images of tissue sections, where manual quantification requires evaluating large areas of tissue with possibly heterogeneous staining. We trained this method on annotations of tumor positivity in smaller tissue microarray cores, where expression and staining are more homogeneous, and then translated the model to whole-slide images.

METHODS

Our method applies a technique called attention-based multiple instance learning to regress the proportion of c-MYC-positive and BCL2-positive tumor cells from pathologist-scored tissue microarray cores. This technique does not require annotation of individual cell nuclei and is trained instead on core-level annotations of percent tumor positivity. We translate this model to scoring of whole-slide images by tessellating the slide into smaller core-sized tissue regions and calculating an aggregate score. Our method was trained on a public tissue microarray dataset from Stanford and applied to whole-slide images from a geographically diverse multi-center cohort produced by the Lymphoma Epidemiology of Outcomes study.
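The core idea above, attention-weighted pooling of instance features followed by a regression head, and the tessellate-and-aggregate step for whole-slide scoring, can be sketched as follows. This is a minimal illustrative forward pass in NumPy, not the authors' implementation: the feature dimensions, the un-gated attention form, the sigmoid regressor, and the simple mean aggregation over regions are all assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def attention_mil_score(feats, V, w, reg_w, reg_b):
    """Regress a bag-level proportion (0..1) from instance features via attention pooling.

    feats: (n_instances, d_feat) array of per-instance (e.g., per-patch) features.
    """
    h = np.tanh(feats @ V.T)            # (n_instances, d_attn) attention hidden layer
    logits = h @ w                      # (n_instances,) unnormalized attention scores
    a = np.exp(logits - logits.max())
    a = a / a.sum()                     # softmax attention weights, sum to 1
    bag = a @ feats                     # (d_feat,) attention-weighted bag embedding
    z = float(bag @ reg_w + reg_b)
    return 1.0 / (1.0 + np.exp(-z))    # sigmoid keeps the proportion in [0, 1]

def wsi_score(region_feats_list, V, w, reg_w, reg_b):
    """Tessellate-and-aggregate: score each core-sized region, then average."""
    scores = [attention_mil_score(f, V, w, reg_w, reg_b) for f in region_feats_list]
    return float(np.mean(scores))

# Toy usage with random parameters and five tessellated regions.
d_feat, d_attn = 16, 8
V = rng.normal(size=(d_attn, d_feat))
w = rng.normal(size=d_attn)
reg_w = rng.normal(size=d_feat)
regions = [rng.normal(size=(30, d_feat)) for _ in range(5)]
slide_level = wsi_score(regions, V, w, reg_w, 0.0)
```

Because supervision is a single core-level percentage, only the bag-level output needs a label; no per-nucleus annotation enters the loss, which is what makes this a multiple-instance formulation.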

RESULTS

In tissue microarrays, the automated method had Pearson correlations of 0.843 and 0.919 with pathologist scores for c-MYC and BCL2, respectively. Using standard clinical thresholds, the sensitivity/specificity of our method was 0.743/0.963 for c-MYC and 0.938/0.951 for BCL2. For double-expressors, sensitivity and specificity were 0.720 and 0.974. When translated to the external WSI dataset scored by two pathologists, Pearson correlations were 0.753 and 0.883 for c-MYC and 0.749 and 0.765 for BCL2, and sensitivity/specificity was 0.857/0.991 and 0.706/0.930 for c-MYC, 0.856/0.719 and 0.855/0.690 for BCL2, and 0.890/1.00 and 0.598/0.952 for double-expressors. Survival analysis demonstrates that for progression-free survival, model-predicted TMA scores significantly stratify double-expressors and non-double-expressors (p = 0.0345), whereas pathologist scores do not (p = 0.128).
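The thresholding and sensitivity/specificity calculations reported above follow a standard pattern that can be sketched briefly. Note the abstract does not state the study's exact cutoffs; the 40% c-MYC and 50% BCL2 values below are the commonly cited clinical cutoffs for double-expressor status and are an assumption of this sketch.

```python
# Assumed clinical cutoffs for double-expressor status by immunohistochemistry;
# the study's exact thresholds may differ from these commonly cited values.
MYC_CUTOFF, BCL2_CUTOFF = 0.40, 0.50

def is_double_expressor(myc_prop, bcl2_prop):
    """Binarize continuous proportion scores into double-expressor status."""
    return myc_prop >= MYC_CUTOFF and bcl2_prop >= BCL2_CUTOFF

def sensitivity_specificity(truth, pred):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    tp = sum(1 for t, p in zip(truth, pred) if t and p)
    tn = sum(1 for t, p in zip(truth, pred) if not t and not p)
    fn = sum(1 for t, p in zip(truth, pred) if t and not p)
    fp = sum(1 for t, p in zip(truth, pred) if not t and p)
    return tp / (tp + fn), tn / (tn + fp)
```

Comparing model-predicted proportions against pathologist scores after binarization at the same cutoffs yields the sensitivity/specificity pairs reported for each marker and for double-expressor status.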

CONCLUSIONS

We conclude that proportion of positive stains can be regressed using attention-based multiple instance learning, that these models generalize well to whole slide images, and that our models can provide non-inferior stratification of progression-free survival outcomes.


Fig. 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f7a3/10797911/ef857ea77b92/13000_2023_1425_Fig1_HTML.jpg
