Academy of Artificial Intelligence, Beijing Institute of Petrochemical Technology, Beijing, 102617, China.
J Imaging Inform Med. 2024 Jun;37(3):1160-1176. doi: 10.1007/s10278-024-01001-4. Epub 2024 Feb 7.
In intraoperative brain cancer procedures, real-time diagnosis is essential for safe and effective care. The prevailing workflow, which relies on histological staining with hematoxylin and eosin (H&E) for tissue processing, is resource-intensive, time-consuming, and labor-intensive. Recently, an innovative approach combining stimulated Raman histology (SRH) with deep convolutional neural networks (CNNs) has emerged, opening a new avenue for real-time cancer diagnosis during surgery. While this approach shows promise, feature extraction remains an area with room for refinement. In this study, we employ a coherent Raman scattering imaging method and a self-supervised deep learning model (VQVAE2) to accelerate SRH image acquisition and improve feature representation, thereby strengthening automated real-time bedside diagnosis. Specifically, we propose the VQSRS network, which integrates vector quantization with a patch-annotation-based proxy task for the analysis of brain tumor subtypes. Trained on images collected from the SRS microscopy system, VQSRS demonstrates a significant speed advantage over traditional techniques (which typically require 20-30 min). Comparative studies using dimensionality-reduction clustering confirm that the diagnostic capacity of VQSRS rivals that of CNNs. By learning a hierarchy of recognizable histological features, VQSRS classifies the major pathological tissue categories in brain tumors. Additionally, an external semantic segmentation method is applied to identify tumor-infiltrated regions in SRH images. Collectively, these findings indicate that this automated real-time prediction technique could streamline intraoperative cancer diagnosis and assist pathologists by simplifying the workflow.
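The vector-quantization step named in the abstract (the "VQ" in VQSRS/VQVAE2) can be illustrated with a minimal sketch. This is not the authors' implementation; it is a generic NumPy illustration, under the standard VQ-VAE assumption that each latent vector is replaced by its nearest codebook entry under squared L2 distance. The function name, array shapes, and codebook size here are illustrative, not from the paper.

```python
import numpy as np

def vector_quantize(latents, codebook):
    """Replace each latent vector with its nearest codebook entry (L2 distance).

    latents:  (N, D) array of encoder outputs
    codebook: (K, D) array of learned code vectors
    Returns the quantized vectors (N, D) and the chosen code indices (N,).
    """
    # Pairwise squared distances between every latent and every code: (N, K)
    dists = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    indices = dists.argmin(axis=1)          # nearest code per latent
    return codebook[indices], indices

# Toy example with random data (K=8 codes of dimension D=4, N=5 latents)
rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 4))
latents = rng.normal(size=(5, 4))
quantized, indices = vector_quantize(latents, codebook)
print(indices.shape, quantized.shape)
```

In a VQ-VAE-style model this lookup sits between the encoder and decoder, so the decoder only ever sees a discrete set of K prototype features; in the paper's setting, those prototypes would correspond to recurring histological patterns in SRH patches.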