Whole-Slide Image Focus Quality: Automatic Assessment and Impact on AI Cancer Detection.

Author Information

Kohlberger Timo, Liu Yun, Moran Melissa, Chen Po-Hsuan Cameron, Brown Trissia, Hipp Jason D, Mermel Craig H, Stumpe Martin C

Affiliations

Google Health, Palo Alto, CA, USA.

Work done at Google Health via Advanced Clinical, Deerfield, IL, USA.

Publication Information

J Pathol Inform. 2019 Dec 12;10:39. doi: 10.4103/jpi.jpi_11_19. eCollection 2019.

Abstract

BACKGROUND

Digital pathology enables remote access, remote consultations, and powerful image analysis algorithms. However, the slide digitization process can create artifacts such as out-of-focus (OOF) regions. OOF is often detected only on careful review, potentially causing rescanning and workflow delays. Although scan-time operator screening for whole-slide OOF is feasible, manual screening for OOF affecting only parts of a slide is impractical.

METHODS

We developed a convolutional neural network (ConvFocus) to exhaustively localize and quantify the severity of OOF regions on digitized slides. ConvFocus was developed using our refined semi-synthetic OOF data generation process and evaluated using seven slides spanning three different tissue types and three different stain types, each of which was digitized using two different whole-slide scanner models. ConvFocus's predictions were compared with pathologist-annotated focus quality grades across 514 distinct regions representing 37,700 35 μm × 35 μm image patches, and with 21 digitized "z-stack" WSIs that contain known OOF patterns.
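The abstract does not detail the semi-synthetic data generation procedure. As a rough, hypothetical sketch of the general idea, the following pure-NumPy snippet degrades an in-focus patch with increasing amounts of blur (repeated 3×3 box blurring as a crude stand-in for the optical defocus kernel), yielding patches with graded severity labels; the function names and the blur model are illustrative assumptions, not the paper's method.

```python
import numpy as np

def blur_once(img: np.ndarray) -> np.ndarray:
    """One pass of a 3x3 box blur over the two spatial axes (wrap-around edges)."""
    out = np.zeros_like(img, dtype=np.float64)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return out / 9.0

def make_semisynthetic_oof(patch: np.ndarray, num_levels: int = 5) -> list:
    """Create graded out-of-focus versions of an in-focus patch.

    Level 0 is the original patch; each higher severity level applies one
    additional blur pass, so blur strength grows with the level index.
    """
    levels = [patch.astype(np.float64)]
    for _ in range(1, num_levels):
        levels.append(blur_once(levels[-1]))
    return levels

# Example: a random RGB array standing in for a 35 um x 35 um tissue patch
rng = np.random.default_rng(0)
patch = rng.integers(0, 256, size=(128, 128, 3)).astype(np.uint8)
graded = make_semisynthetic_oof(patch)
```

Training on pairs of (degraded patch, severity level) would then let a regressor such as ConvFocus predict a per-patch OOF severity on real slides.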

RESULTS

When compared to pathologist-graded focus quality, ConvFocus achieved Spearman rank correlation coefficients of 0.81 and 0.94 on the two scanners and reproduced the expected OOF patterns from z-stack scanning. We also evaluated the impact of OOF on the accuracy of a state-of-the-art metastatic breast cancer detector and saw a consistent decrease in performance with increasing OOF.
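The agreement metric reported here, Spearman's rank correlation, is the Pearson correlation of the two variables' ranks (with ties assigned average ranks). A minimal self-contained sketch, using made-up grades rather than the paper's data:

```python
import numpy as np

def spearman_rho(x, y) -> float:
    """Spearman rank correlation: Pearson correlation of average-tied ranks."""
    def ranks(a):
        a = np.asarray(a, dtype=float)
        order = np.argsort(a)
        r = np.empty(len(a))
        r[order] = np.arange(1, len(a) + 1)  # ordinal ranks
        for v in np.unique(a):               # average ranks over ties
            tied = a == v
            r[tied] = r[tied].mean()
        return r

    rx, ry = ranks(x), ranks(y)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx * ry).sum() / np.sqrt((rx**2).sum() * (ry**2).sum()))

# Hypothetical per-region grades: pathologist (0 = in focus, 3 = severe OOF)
pathologist = [0, 0, 1, 1, 2, 2, 3, 3]
# Hypothetical model predictions on a continuous severity scale
predicted = [0.1, 0.3, 0.9, 1.2, 1.8, 2.4, 2.6, 3.1]

rho = spearman_rho(pathologist, predicted)
```

Because it compares ranks rather than raw values, the metric rewards a model whose severity ordering matches the pathologist's, even if the numeric scales differ.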

CONCLUSIONS

Comprehensive whole-slide OOF categorization could enable rescans before pathologist review, potentially reducing the impact of digitization focus issues on the clinical workflow. We show that the algorithm trained on our semi-synthetic OOF data generalizes well to real OOF regions across tissue types, stains, and scanners. Finally, quantitative OOF maps can flag regions that might otherwise be misclassified by image analysis algorithms, preventing OOF-induced errors.
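One way such a quantitative OOF map could be consumed downstream (a hypothetical usage sketch, not an interface described in the abstract) is to threshold the per-patch severity grid: patches above the threshold are flagged for rescanning or excluded from analysis, and the affected fraction can drive a whole-slide rescan decision.

```python
import numpy as np

def flag_oof_regions(oof_map: np.ndarray, threshold: float = 1.0):
    """Flag patches whose predicted OOF severity exceeds `threshold`.

    Returns a boolean mask over the patch grid and the fraction of the
    slide area flagged as out of focus.
    """
    mask = oof_map > threshold
    return mask, float(mask.mean())

# Toy 3x3 severity map standing in for a per-patch prediction grid
oof_map = np.array([[0.0, 0.2, 2.5],
                    [0.1, 1.4, 3.0],
                    [0.0, 0.0, 0.3]])
mask, flagged_fraction = flag_oof_regions(oof_map, threshold=1.0)
```

A cancer detector could then ignore (or down-weight) flagged patches, avoiding the OOF-induced misclassifications described above.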


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8bbe/6939343/6b13d1591c71/JPI-10-39-g001.jpg
