
Deep learning-based virtual staining, segmentation, and classification in label-free photoacoustic histology of human specimens.

Author information

Yoon Chiho, Park Eunwoo, Misra Sampa, Kim Jin Young, Baik Jin Woo, Kim Kwang Gi, Jung Chan Kwon, Kim Chulhong

Affiliations

Departments of Electrical Engineering, Convergence IT Engineering, Mechanical Engineering, Medical Science and Engineering, Graduate School of Artificial Intelligence, and Medical Device Innovation Center, Pohang University of Science and Technology (POSTECH), Pohang, Republic of Korea.

Opticho Inc., Pohang, Republic of Korea.

Publication information

Light Sci Appl. 2024 Sep 2;13(1):226. doi: 10.1038/s41377-024-01554-7.

Abstract

In pathological diagnostics, histological images highlight the oncological features of excised specimens, but they require laborious and costly staining procedures. Despite recent innovations in label-free microscopy that simplify complex staining procedures, technical limitations and inadequate histological visualization remain problems in clinical settings. Here, we demonstrate an interconnected deep learning (DL)-based framework for performing automated virtual staining, segmentation, and classification in label-free photoacoustic histology (PAH) of human specimens. The framework comprises three components: (1) an explainable contrastive unpaired translation (E-CUT) method for virtual H&E (VHE) staining, (2) a U-Net architecture for feature segmentation, and (3) a DL-based stepwise feature fusion method (StepFF) for classification. The framework demonstrates promising performance at each step of its application to human liver cancers. In virtual staining, E-CUT preserves the morphology of the cell nucleus and cytoplasm, making VHE images highly similar to real H&E images. In segmentation, various features (e.g., cell area, number of cells, and the distance between cell nuclei) are successfully segmented in VHE images. Finally, using deep feature vectors from PAH, VHE, and segmented images, StepFF achieves a 98.00% classification accuracy, compared to the 94.80% accuracy of conventional PAH classification. In particular, StepFF's classification reached a sensitivity of 100% in an evaluation by three pathologists, demonstrating its applicability in real clinical settings. This series of DL methods for label-free PAH has great potential as a practical clinical strategy for digital pathology.
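The abstract describes a three-stage pipeline: (1) a CUT-style generator translates label-free PAH images into virtual H&E, (2) a U-Net segments nuclear features in the VHE output, and (3) StepFF fuses deep feature vectors from the PAH, VHE, and segmentation branches for classification. The sketch below illustrates how such a pipeline could be wired together in PyTorch at inference time; all module architectures, channel widths, feature dimensions, and the concatenation-based fusion are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of the three-stage PAH pipeline described in the abstract.
# All module names, layer choices, and sizes are illustrative assumptions.
import torch
import torch.nn as nn

class StainGenerator(nn.Module):
    """Toy stand-in for the E-CUT generator (grayscale PAH -> RGB virtual H&E)."""
    def __init__(self, in_ch=1, out_ch=3, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, width, 7, padding=3), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, out_ch, 7, padding=3), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

class TinyUNet(nn.Module):
    """Toy one-level U-Net stand-in for segmenting nuclei in VHE images."""
    def __init__(self, in_ch=3, out_ch=1, width=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(in_ch, width, 3, padding=1),
                                 nn.ReLU(inplace=True))
        self.down = nn.Sequential(nn.MaxPool2d(2),
                                  nn.Conv2d(width, width * 2, 3, padding=1),
                                  nn.ReLU(inplace=True))
        self.up = nn.ConvTranspose2d(width * 2, width, 2, stride=2)
        self.dec = nn.Sequential(nn.Conv2d(width * 2, width, 3, padding=1),
                                 nn.ReLU(inplace=True),
                                 nn.Conv2d(width, out_ch, 1))
    def forward(self, x):
        e = self.enc(x)
        u = self.up(self.down(e))
        return self.dec(torch.cat([u, e], dim=1))  # skip connection

class StepwiseFusionClassifier(nn.Module):
    """Toy stand-in for StepFF: concatenates deep feature vectors from the
    PAH, VHE, and segmentation branches before the classification head."""
    def __init__(self, feat_dim=128, n_classes=2):
        super().__init__()
        self.head = nn.Sequential(nn.Linear(3 * feat_dim, feat_dim),
                                  nn.ReLU(inplace=True),
                                  nn.Linear(feat_dim, n_classes))
    def forward(self, f_pah, f_vhe, f_seg):
        return self.head(torch.cat([f_pah, f_vhe, f_seg], dim=1))

if __name__ == "__main__":
    pah = torch.randn(1, 1, 256, 256)      # grayscale photoacoustic patch
    vhe = StainGenerator()(pah)            # (1) virtual H&E staining
    mask = TinyUNet()(vhe)                 # (2) nucleus segmentation
    # Stand-in deep features; the paper extracts these with CNN encoders
    # from the PAH, VHE, and segmented images respectively.
    f_pah, f_vhe, f_seg = (torch.randn(1, 128) for _ in range(3))
    logits = StepwiseFusionClassifier()(f_pah, f_vhe, f_seg)  # (3) StepFF
    print(vhe.shape, mask.shape, logits.shape)
```

Note that only the inference-time data flow is shown: in the actual E-CUT method, the generator is trained on unpaired PAH and H&E images (CUT-style contrastive unpaired translation), which this sketch does not implement.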


Fig. 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/723b/11369251/9c26c15be328/41377_2024_1554_Fig1_HTML.jpg
