Department of Electrical and Computer Engineering, University of Alabama, Tuscaloosa, Alabama 35401, USA.
Department of Informatics, College of Computing, New Jersey Institute of Technology, Newark, New Jersey 07103, USA.
J Digit Imaging. 2021 Apr;34(2):404-417. doi: 10.1007/s10278-021-00428-3. Epub 2021 Mar 16.
The objective of this paper was to develop a computer-aided diagnostic (CAD) tool for automated analysis of capsule endoscopy (CE) images, more precisely, to detect small intestinal abnormalities such as bleeding.
In particular, we explore a convolutional neural network (CNN)-based deep learning framework to identify bleeding and non-bleeding CE images, in which a pre-trained AlexNet neural network is used, via transfer learning, to train a CNN that carries out the identification. Moreover, bleeding zones in images identified as bleeding are delineated using deep learning-based semantic segmentation with a SegNet deep neural network.
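To make the transfer-learning setup concrete, the following is a minimal sketch (in PyTorch/torchvision, which is not necessarily the toolchain used in the paper) of fine-tuning a pre-trained AlexNet for the two-class bleeding vs. non-bleeding task; the preprocessing values, learning rate, and `train_step` helper are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

# Load AlexNet with ImageNet weights and replace the final classifier layer
# so it predicts two classes: bleeding vs. non-bleeding (assumed class order).
model = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
model.classifier[6] = nn.Linear(4096, 2)

# Standard torchvision preprocessing for AlexNet inputs; CE frames are assumed
# to be RGB images resized to the 224x224 resolution the network expects.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)

def train_step(images, labels):
    """One optimization step on a batch of preprocessed CE frames."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch only the final fully connected layer is replaced while the pre-trained convolutional features are fine-tuned jointly, which is the usual way transfer learning from AlexNet is applied to small medical-image datasets.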
To evaluate the performance of the proposed framework, we carry out experiments on two publicly available clinical datasets and achieve F1 scores of 98.49% and 88.39% on the capsule endoscopy.org and KID datasets, respectively. For bleeding zone identification, 94.42% global accuracy and 90.69% weighted intersection over union (IoU) are achieved.
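For reference, the sketch below shows how the reported metrics (F1 score for image-level classification, global pixel accuracy and frequency-weighted IoU for segmentation) are commonly computed; it is not the authors' evaluation code, and the array names are hypothetical.

```python
import numpy as np

def f1_score_binary(y_true, y_pred):
    """F1 = 2*precision*recall / (precision + recall), with label 1 = bleeding image."""
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0

def segmentation_metrics(gt_mask, pred_mask, num_classes=2):
    """Global pixel accuracy and IoU weighted by each class's pixel frequency."""
    gt, pred = gt_mask.ravel(), pred_mask.ravel()
    global_acc = np.mean(gt == pred)
    weighted_iou = 0.0
    for c in range(num_classes):
        inter = np.sum((gt == c) & (pred == c))
        union = np.sum((gt == c) | (pred == c))
        freq = np.mean(gt == c)                   # class frequency in the ground truth
        if union:
            weighted_iou += freq * inter / union  # weight per-class IoU by frequency
    return global_acc, weighted_iou
```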
Finally, our results are compared with those of other recently developed state-of-the-art methods, and consistent improvements are demonstrated in the performance measures for both bleeding image and bleeding zone detection. Relative to the established practice of manual inspection and annotation of CE images by a physician, our framework offers considerable savings in annotation time and human labor for bleeding detection in CE images, while providing the additional benefits of bleeding zone delineation and increased detection accuracy. Moreover, the overall cost of CE enabled by our framework will be much lower owing to the reduced manual labor, which can make CE affordable for a larger population.