Song Andrew H, Williams Mane, Williamson Drew F K, Jaume Guillaume, Zhang Andrew, Chen Bowen, Serafin Robert, Liu Jonathan T C, Baras Alex, Parwani Anil V, Mahmood Faisal
Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA.
Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA.
ArXiv. 2023 Jul 27:arXiv:2307.14907v1.
Human tissue consists of complex structures that display a diversity of morphologies, forming a tissue microenvironment that is, by nature, three-dimensional (3D). However, the current standard of care involves slicing 3D tissue specimens into two-dimensional (2D) sections and selecting a few for microscopic evaluation, with concomitant risks of sampling bias and misdiagnosis. To this end, there have been intense efforts to capture 3D tissue morphology and transition to 3D pathology, with the development of multiple high-resolution 3D imaging modalities. However, these tools have seen little translation to clinical practice, as manual evaluation of such large datasets by pathologists is impractical and there is a lack of computational platforms that can efficiently process the 3D images and provide patient-level clinical insights. Here we present Modality-Agnostic Multiple instance learning for volumetric Block Analysis (MAMBA), a deep-learning-based platform for processing 3D tissue images from diverse imaging modalities and predicting patient outcomes. Archived prostate cancer specimens were imaged with open-top light-sheet microscopy or microcomputed tomography, and the resulting 3D datasets were used to train risk-stratification networks based on 5-year biochemical recurrence outcomes via MAMBA. With the 3D block-based approach, MAMBA achieves areas under the receiver operating characteristic curve (AUC) of 0.86 and 0.74, outperforming traditional 2D single-slice-based prognostication (AUC of 0.79 and 0.57) and suggesting superior prognostication with 3D morphological features. Further analyses reveal that the incorporation of greater tissue volume improves prognostic performance and mitigates risk-prediction variability from sampling bias, suggesting that there is value in capturing larger extents of spatially heterogeneous 3D morphology.
With the rapid growth and adoption of 3D spatial biology and pathology techniques by researchers and clinicians, MAMBA provides a general and efficient framework for 3D weakly supervised learning for clinical decision support and can help to reveal novel 3D morphological biomarkers for prognosis and therapeutic response.
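The abstract describes weakly supervised multiple-instance learning (MIL): a patient's 3D tissue block is divided into many small volumetric instances, per-instance features are aggregated into a single bag-level representation, and a risk score is predicted from patient-level labels alone. The sketch below illustrates one common aggregation scheme, attention-based MIL pooling, as a NumPy forward pass; it is a minimal illustration of the general idea, not MAMBA's actual architecture, and all weights, dimensions, and function names here are hypothetical.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attention_mil_forward(instance_feats, w_attn, w_cls):
    """Minimal attention-based MIL pooling (illustrative only).

    instance_feats: (n_instances, d) array, e.g. one feature vector per
        small 3D patch cropped from a tissue block.
    w_attn: (d,) attention scoring weights (hypothetical, untrained).
    w_cls:  (d,) classifier weights (hypothetical, untrained).
    Returns (risk, attn): a scalar risk score in (0, 1) and the
    per-instance attention weights, which sum to 1.
    """
    scores = np.tanh(instance_feats @ w_attn)   # per-instance relevance score
    attn = softmax(scores)                      # normalize into weights
    bag_feat = attn @ instance_feats            # attention-weighted bag feature
    logit = bag_feat @ w_cls                    # bag-level classifier
    risk = 1.0 / (1.0 + np.exp(-logit))         # sigmoid -> risk probability
    return risk, attn

# Toy usage with random features and weights (32 instances, 64-dim each).
rng = np.random.default_rng(0)
feats = rng.normal(size=(32, 64))
w_a = rng.normal(size=64)
w_c = rng.normal(size=64)
risk, attn = attention_mil_forward(feats, w_a, w_c)
```

Because supervision is only at the bag (patient) level, training such a model needs no instance-level annotation; the attention weights also indicate which sub-volumes drove the prediction, which is how MIL approaches typically surface candidate morphological biomarkers.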