
Recognizing basal cell carcinoma on smartphone-captured digital histopathology images with a deep neural network.

Author information

Jiang Y Q, Xiong J H, Li H Y, Yang X H, Yu W T, Gao M, Zhao X, Ma Y P, Zhang W, Guan Y F, Gu H, Sun J F

Affiliations

Department of Dermatopathology, Institute of Dermatology, Peking Union Medical College & Chinese Academy of Medical Sciences, Nanjing, 210042, China.

Beijing Tulip Partners Technology Co., Ltd, Beijing, China.

Publication information

Br J Dermatol. 2020 Mar;182(3):754-762. doi: 10.1111/bjd.18026. Epub 2019 Aug 22.

Abstract

BACKGROUND

Pioneering effort has been made to facilitate the recognition of pathology in malignancies based on whole-slide images (WSIs) through deep learning approaches. It remains unclear whether we can accurately detect and locate basal cell carcinoma (BCC) using smartphone-captured images.

OBJECTIVES

To develop deep neural network frameworks for accurate BCC recognition and segmentation based on smartphone-captured microscopic ocular images (MOIs).

METHODS

We collected a total of 8046 MOIs, 6610 of which had binary classification labels and the remaining 1436 of which had pixelwise annotations. In addition, 128 WSIs were collected for comparison. Two deep learning frameworks were created. The 'cascade' framework used a classification model to identify hard cases (images with low prediction confidence) and a segmentation model for further in-depth analysis of those hard cases. The 'segmentation' framework directly segmented and classified all images. Sensitivity, specificity and area under the curve (AUC) were used to evaluate the overall performance of BCC recognition.
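The two-stage decision logic of the 'cascade' framework can be sketched as follows. The confidence threshold, the model stubs and the positive-call rule are illustrative assumptions, not the authors' actual implementation:

```python
import numpy as np

CONFIDENCE_THRESHOLD = 0.9  # assumed cut-off separating easy from hard cases


def classify(image):
    """Stand-in for the classification model: returns P(BCC). Placeholder only."""
    return float(np.clip(image.mean(), 0.0, 1.0))


def segment(image):
    """Stand-in for the segmentation model: returns a binary tumour mask. Placeholder only."""
    return (image > 0.5).astype(np.uint8)


def cascade_predict(image):
    """Fast classifier first; hard (low-confidence) cases go to segmentation."""
    p = classify(image)
    confidence = max(p, 1.0 - p)
    if confidence >= CONFIDENCE_THRESHOLD:
        # Easy case: trust the classifier alone, no mask produced.
        return p >= 0.5, None
    # Hard case: run segmentation and call BCC-positive if any tumour pixels.
    mask = segment(image)
    return bool(mask.any()), mask


easy = np.full((4, 4), 0.95)      # high classifier score -> confident positive
label, mask = cascade_predict(easy)
print(label, mask is None)        # True True: classifier alone decided
```

Because most images are easy cases that skip the expensive segmentation step, this structure is consistent with the shorter per-image runtime reported for the 'cascade' framework in the results.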

RESULTS

The MOI- and WSI-based models achieved comparable AUCs around 0·95. The 'cascade' framework achieved 0·93 sensitivity and 0·91 specificity. The 'segmentation' framework was more accurate but required more computational resources, achieving 0·97 sensitivity, 0·94 specificity and 0·987 AUC. The runtime of the 'segmentation' framework was 15·3 ± 3·9 s per image, whereas the 'cascade' framework took 4·1 ± 1·4 s. Additionally, the 'segmentation' framework achieved 0·863 mean intersection over union.
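Mean intersection over union (mIoU), the segmentation metric reported above, is the per-class overlap between predicted and ground-truth masks, averaged over classes. A minimal sketch, using made-up toy masks rather than data from the study:

```python
import numpy as np


def mean_iou(pred, target, num_classes=2):
    """Average IoU over classes present in either mask (background + tumour here)."""
    ious = []
    for c in range(num_classes):
        p, t = pred == c, target == c
        union = np.logical_or(p, t).sum()
        if union == 0:
            continue  # class absent from both masks; skip it
        ious.append(np.logical_and(p, t).sum() / union)
    return float(np.mean(ious))


pred = np.array([[1, 1], [0, 0]])    # toy predicted mask (1 = tumour)
target = np.array([[1, 0], [0, 0]])  # toy ground-truth mask
print(round(mean_iou(pred, target), 3))  # → 0.583
```

Here class 0 scores IoU 2/3 and class 1 scores 1/2, so the mean is about 0.583; the study's reported 0.863 mIoU corresponds to substantially tighter mask overlap.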

CONCLUSIONS

Based on MOIs that are readily accessible via smartphone photography, we developed two deep learning frameworks that recognize BCC pathology with high sensitivity and specificity. This work opens a new avenue for automatic BCC diagnosis in different clinical scenarios.

What's already known about this topic?

The diagnosis of basal cell carcinoma (BCC) is labour intensive owing to the large number of images to be examined, especially when consecutive slide reading is needed in Mohs surgery. Deep learning approaches have demonstrated promising results on pathological image-related diagnostic tasks. Previous studies have focused on whole-slide images (WSIs) and leveraged classification on image patches for detecting and localizing breast cancer metastases.

What does this study add?

Instead of WSIs, microscopic ocular images (MOIs) photographed from microscope eyepieces using smartphone cameras were used to develop neural network models for recognizing BCC automatically. The MOI- and WSI-based models achieved comparable areas under the curve around 0·95. Two deep learning frameworks for recognizing BCC pathology were developed with high sensitivity and specificity. Recognizing BCC through a smartphone could be considered a future clinical choice.

