Wang Rui, Chow Sarah S L, Serafin Robert B, Xie Weisi, Han Qinghua, Baraznenok Elena, Lan Lydia, Bishop Kevin W, Liu Jonathan T C
University of Washington, Department of Mechanical Engineering, Seattle, Washington, United States.
University of Washington, Department of Bioengineering, Seattle, Washington, United States.
J Biomed Opt. 2024 Mar;29(3):036001. doi: 10.1117/1.JBO.29.3.036001. Epub 2024 Mar 1.
In recent years, we and others have developed non-destructive methods to obtain three-dimensional (3D) pathology datasets of clinical biopsies and surgical specimens. For prostate cancer risk stratification (prognostication), standard-of-care Gleason grading is based on examining the morphology of prostate glands in thin 2D tissue sections. This motivates us to segment prostate glands within our 3D pathology datasets, enabling computational analysis of 3D glandular features that could offer improved prognostic performance.
To facilitate prostate cancer risk assessment, we developed a computationally efficient and accurate deep learning model for 3D gland segmentation based on open-top light-sheet microscopy datasets of human prostate biopsies stained with a fluorescent analog of hematoxylin and eosin (H&E).
For 3D gland segmentation of our H&E-analog 3D pathology datasets, we previously developed a hybrid deep learning and computer vision pipeline, image translation-assisted segmentation in 3D (ITAS3D), which required a complex two-stage procedure and tedious manual parameter tuning. To simplify this procedure, we use the 3D gland-segmentation masks previously generated by ITAS3D as training data for a direct end-to-end deep learning segmentation model, nnU-Net. The inputs to this model are 3D pathology datasets of prostate biopsies rapidly stained with an inexpensive fluorescent analog of H&E, and the outputs are 3D semantic segmentation masks of the gland epithelium, gland lumen, and surrounding stromal compartments within the tissue.
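As a minimal sketch of how such an end-to-end model can be trained, the snippet below arranges a set of H&E-analog volumes and their ITAS3D-derived masks into nnU-Net v2's raw-data layout and launches preprocessing and training through its command-line entry points. The dataset name/ID, directory paths, and label indices are illustrative assumptions, not the authors' exact configuration.

```python
# Sketch: prepare an nnU-Net v2 dataset from 3D pathology volumes and
# ITAS3D-derived masks, then launch preprocessing and training.
# Dataset ID/name, paths, and label indices are illustrative assumptions.
import json
import os
import shutil
import subprocess
from pathlib import Path

raw_root = Path(os.environ["nnUNet_raw"])  # set per nnU-Net v2 documentation
dataset_dir = raw_root / "Dataset501_ProstateGlands3D"
(dataset_dir / "imagesTr").mkdir(parents=True, exist_ok=True)
(dataset_dir / "labelsTr").mkdir(parents=True, exist_ok=True)

# Copy each fluorescent H&E-analog volume and its ITAS3D mask (NIfTI here,
# though nnU-Net supports other formats via its image-IO backends).
cases = sorted(Path("otls_volumes").glob("*.nii.gz"))  # hypothetical source dir
for i, vol in enumerate(cases):
    case_id = f"prostate_{i:03d}"
    shutil.copy(vol, dataset_dir / "imagesTr" / f"{case_id}_0000.nii.gz")
    mask = Path("itas3d_masks") / vol.name  # hypothetical mask directory
    shutil.copy(mask, dataset_dir / "labelsTr" / f"{case_id}.nii.gz")

# dataset.json in the nnU-Net v2 format: one input channel (the H&E analog)
# and four semantic classes matching the paper's tissue compartments.
dataset_json = {
    "channel_names": {"0": "HE_analog"},
    "labels": {"background": 0, "stroma": 1, "epithelium": 2, "lumen": 3},
    "numTraining": len(cases),
    "file_ending": ".nii.gz",
}
(dataset_dir / "dataset.json").write_text(json.dumps(dataset_json, indent=2))

# Fingerprint extraction and experiment planning, then 3D full-resolution training.
subprocess.run(["nnUNetv2_plan_and_preprocess", "-d", "501",
                "--verify_dataset_integrity"], check=True)
subprocess.run(["nnUNetv2_train", "501", "3d_fullres", "0"], check=True)
```

Only fold 0 is trained here; nnU-Net's default protocol trains five cross-validation folds (0-4).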
nnU-Net demonstrates remarkable accuracy in 3D gland segmentation even with limited training data. Moreover, compared with the previous ITAS3D pipeline, nnU-Net is simpler and faster to operate, and it maintains good accuracy even with lower-resolution inputs.
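To make the accuracy comparison concrete, the sketch below computes a per-class Dice coefficient between a predicted 3D mask (e.g., produced by nnUNetv2_predict) and an ITAS3D-derived reference mask; the file paths and label convention follow the same illustrative assumptions as above.

```python
# Sketch: per-class Dice overlap between a predicted 3D mask and a
# reference (ITAS3D-derived) mask. Label indices follow the illustrative
# convention above (1 = stroma, 2 = epithelium, 3 = lumen).
import numpy as np
import nibabel as nib  # common NIfTI reader; any volumetric IO would work

def dice_per_class(pred: np.ndarray, ref: np.ndarray, labels=(1, 2, 3)) -> dict:
    """Dice = 2|A ∩ B| / (|A| + |B|) for each semantic class."""
    scores = {}
    for lab in labels:
        p, r = pred == lab, ref == lab
        denom = p.sum() + r.sum()
        scores[lab] = float(2.0 * np.logical_and(p, r).sum() / denom) if denom else 1.0
    return scores

# Hypothetical file names matching the training sketch above.
pred = np.asarray(nib.load("predictions/prostate_000.nii.gz").dataobj).astype(int)
ref = np.asarray(nib.load("labelsTr/prostate_000.nii.gz").dataobj).astype(int)
print(dice_per_class(pred, ref))
```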
Our trained deep learning-based 3D segmentation model will facilitate future studies to demonstrate the value of computational 3D pathology for guiding critical treatment decisions for patients with prostate cancer.