Zhang Ruikai, Zheng Yali, Mak Tony Wing Chung, Yu Ruoxi, Wong Sunny H, Lau James Y W, Poon Carmen C Y
IEEE J Biomed Health Inform. 2017 Jan;21(1):41-47. doi: 10.1109/JBHI.2016.2635662. Epub 2016 Dec 5.
Colorectal cancer (CRC) is a leading cause of cancer deaths worldwide. Although polypectomy at an early stage reduces CRC incidence, 90% of polyps are small or diminutive, and removing them poses risks to patients that may outweigh the benefits. Correctly detecting and predicting polyp type during colonoscopy allows endoscopists to resect and discard the tissue without submitting it for histology, saving time and costs. Nevertheless, visual assessment of early-stage polyps by endoscopists varies. Therefore, this paper aims at developing a fully automatic algorithm to detect and classify hyperplastic and adenomatous colorectal polyps. Adenomatous polyps should be removed, whereas distal diminutive hyperplastic polyps are considered clinically insignificant and may be left in situ. A novel transfer learning application is proposed, utilizing features learned by a deep convolutional neural network from large nonmedical datasets of 1.4-2.5 million images. The endoscopic images collected for the experiments were taken under varying lighting conditions, zoom, and optical magnification, comprising 1104 nonpolyp endoscopic images taken under both white-light and narrowband imaging (NBI) endoscopy and 826 NBI endoscopic polyp images, of which 263 were hyperplasia and 563 were adenoma as confirmed by histology. The proposed method first distinguishes polyp images from nonpolyp images and then predicts the polyp histology. When compared with visual inspection by endoscopists, the results of this study show that the proposed method has similar precision (87.3% versus 86.4%) but a higher recall rate (87.6% versus 77.0%) and a higher accuracy (85.9% versus 74.3%). In conclusion, automatic algorithms can assist endoscopists in identifying polyps that are adenomatous but have been incorrectly judged as hyperplastic and, therefore, enable timely resection of these polyps at an early stage before they develop into invasive cancer.
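The abstract describes transfer learning from large nonmedical image datasets followed by a two-stage decision: polyp versus nonpolyp detection, then hyperplastic versus adenomatous classification. The sketch below illustrates this general idea only; the backbone (an ImageNet-pretrained ResNet-50), the linear SVM heads, and all names such as extract_features or train_paths_stage1 are assumptions for illustration and are not the authors' exact pipeline.

```python
# Minimal transfer-learning sketch (assumed design, not the paper's method):
# a pretrained CNN is used as a fixed feature extractor, and two lightweight
# classifiers are trained on top of the deep features.
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.svm import SVC
from PIL import Image

# ImageNet-pretrained backbone with the final classification layer removed,
# so it outputs a fixed-length feature vector per image.
backbone = models.resnet50(weights="IMAGENET1K_V1")
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def extract_features(image_paths):
    """Map endoscopic frames to deep features from the pretrained backbone."""
    batch = torch.stack([preprocess(Image.open(p).convert("RGB"))
                         for p in image_paths])
    return backbone(batch).numpy()

# Stage 1: polyp vs. nonpolyp; Stage 2: hyperplasia vs. adenoma.
# train_paths_stage1/labels_stage1 etc. are hypothetical placeholders for the
# labeled white-light/NBI image sets described in the abstract.
detector = SVC(kernel="linear", probability=True)
histology_clf = SVC(kernel="linear", probability=True)
# detector.fit(extract_features(train_paths_stage1), labels_stage1)
# histology_clf.fit(extract_features(train_paths_stage2), labels_stage2)

def classify(image_path):
    """Run the two-stage decision on a single endoscopic image."""
    feats = extract_features([image_path])
    if detector.predict(feats)[0] == 0:   # 0 = nonpolyp (assumed encoding)
        return "nonpolyp"
    return ("adenoma" if histology_clf.predict(feats)[0] == 1
            else "hyperplasia")
```

Freezing the pretrained backbone and training only shallow classifiers on its features is one common way to apply transfer learning when the target dataset (here, roughly two thousand endoscopic images) is far smaller than the source dataset used for pretraining.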