

Stable polyp-scene classification via subsampling and residual learning from an imbalanced large dataset.

Author information

Itoh Hayato, Roth Holger, Oda Masahiro, Misawa Masashi, Mori Yuichi, Kudo Shin-Ei, Mori Kensaku

Affiliations

Graduate School of Informatics, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, 464-8601, Japan.

Digestive Disease Center, Showa University Northern Yokohama Hospital, Tsuzuki-ku, Yokohama, 224-8503, Japan.

Publication information

Healthc Technol Lett. 2019 Nov 26;6(6):237-242. doi: 10.1049/htl.2019.0079. eCollection 2019 Dec.

Abstract

This Letter presents a stable polyp-scene classification method with low false-positive (FP) detection. Precise automated polyp detection during colonoscopies is essential for preventing colon-cancer deaths. There is, therefore, a demand for a computer-assisted diagnosis (CAD) system for colonoscopies to assist colonoscopists. A high-performance CAD system with spatiotemporal feature extraction via a three-dimensional convolutional neural network (3D CNN), trained on a limited dataset, achieved about 80% detection accuracy on actual colonoscopic videos. Consequently, further improvement of a 3D CNN with larger training data is feasible. However, the ratio between polyp and non-polyp scenes is quite imbalanced in a large colonoscopic video dataset. This imbalance leads to unstable polyp detection. To circumvent this, the authors propose an efficient and balanced learning technique for deep residual learning. At the beginning of each training epoch, their method randomly selects a subset of non-polyp scenes whose size equals the number of polyp-scene still images. Furthermore, they introduce post-processing for stable polyp-scene classification, which reduces the FPs that occur in practical applications of polyp-scene classification. They evaluate several residual networks on a large polyp-detection dataset consisting of 1027 colonoscopic videos. In the scene-level evaluation, their proposed method achieves stable polyp-scene classification with 0.86 sensitivity and 0.97 specificity.
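The per-epoch balanced subsampling described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the sample lists, sizes, and the helper name `balanced_epoch_subset` are hypothetical.

```python
import random

def balanced_epoch_subset(polyp_samples, non_polyp_samples, seed=None):
    """Draw a random subset of non-polyp samples whose size equals the
    number of polyp samples, re-drawn at the start of each epoch."""
    rng = random.Random(seed)
    subset = rng.sample(non_polyp_samples, k=len(polyp_samples))
    return polyp_samples + subset

# Toy imbalanced dataset: few polyp scenes, many non-polyp scenes.
polyp = [("polyp", i) for i in range(3)]
non_polyp = [("non_polyp", i) for i in range(100)]

for epoch in range(2):
    # A fresh 1:1 pool each epoch, so every epoch sees a different
    # slice of the majority class while the class ratio stays balanced.
    pool = balanced_epoch_subset(polyp, non_polyp, seed=epoch)
    assert len(pool) == 2 * len(polyp)
```

Re-drawing the majority-class subset every epoch lets training eventually cover the full non-polyp data without ever presenting an imbalanced batch distribution to the network.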


Figure: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4b20/6952261/40310cfb94dc/HTL.2019.0079.01.jpg
