

Automatic Colorectal Cancer Screening Using Deep Learning in Spatial Light Interference Microscopy Data.

Affiliations

Quantitative Light Imaging Laboratory, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA.

Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA.

Publication Information

Cells. 2022 Feb 17;11(4):716. doi: 10.3390/cells11040716.

Abstract

The surgical pathology workflow currently adopted by clinics uses staining to reveal tissue architecture within thin sections. A trained pathologist then conducts a visual examination of these slices and, since the investigation is based on an empirical assessment, a certain amount of subjectivity is unavoidable. Furthermore, the reliance on external contrast agents such as hematoxylin and eosin (H&E), albeit being well-established methods, makes it difficult to standardize color balance, staining strength, and imaging conditions, hindering automated computational analysis. In response to these challenges, we applied spatial light interference microscopy (SLIM), a label-free method that generates contrast based on intrinsic tissue refractive index signatures. Thus, we reduce human bias and make imaging data comparable across instruments and clinics. We applied a mask R-CNN deep learning algorithm to the SLIM data to achieve an automated colorectal cancer screening procedure, i.e., classifying normal vs. cancerous specimens. Our results, obtained on a tissue microarray consisting of specimens from 132 patients, resulted in 91% accuracy for gland detection, 99.71% accuracy in gland-level classification, and 97% accuracy in core-level classification. A SLIM tissue scanner accompanied by an application-specific deep learning algorithm may become a valuable clinical tool, enabling faster and more accurate assessments by pathologists.
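The abstract reports both gland-level and core-level accuracies, which implies the per-gland predictions are aggregated into a single decision per tissue core. The paper's exact aggregation rule is not stated here, so the majority-vote threshold below is an assumption for illustration only:

```python
# Hypothetical sketch: aggregating gland-level predictions into a core-level
# label. The 0.5 cancer-fraction threshold is an assumption, not the authors'
# published rule.

def core_label(gland_predictions, cancer_fraction_threshold=0.5):
    """Classify a tissue core as cancerous if at least the given fraction
    of its detected glands were predicted cancerous."""
    if not gland_predictions:
        raise ValueError("no glands detected in core")
    cancerous = sum(1 for p in gland_predictions if p == "cancer")
    fraction = cancerous / len(gland_predictions)
    return "cancer" if fraction >= cancer_fraction_threshold else "normal"

print(core_label(["cancer", "cancer", "normal"]))  # cancer
print(core_label(["normal", "normal", "normal"]))  # normal
```

Because core-level accuracy (97%) is lower than gland-level accuracy (99.71%), any such aggregation inherits errors from both the detector and the classifier; the threshold choice trades sensitivity against specificity at the core level.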


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/87aa/8870406/096c7cbdc4f4/cells-11-00716-g001.jpg
