Annu Int Conf IEEE Eng Med Biol Soc. 2022 Jul;2022:2177-2180. doi: 10.1109/EMBC48229.2022.9871040.
This study aimed to build convolutional neural network (CNN) models capable of classifying upper endoscopy images in order to determine the stage of infection in the development of gastric cancer. Two classification problems were covered: a first with a smaller number of categorical classes and a lower degree of detail, and a second with a larger number of classes, one for each stage of precancerous conditions in Correa's cascade. Three public datasets were combined to build the dataset that served as input for the classification tasks. The CNN models built for this study are capable of identifying the stage of precancerous conditions/lesions at the time of an upper endoscopy. A model based on the DenseNet169 architecture achieved an average accuracy of 0.72 in discriminating among the different stages of infection. The trade-off between the level of detail in the definition of lesion classes and classification performance was also explored. Applying Grad-CAMs to the trained models shows that the proposed CNN architectures base their classification output on the extraction of physiologically relevant image features. Clinical relevance: this research could improve the accuracy of upper endoscopy exams, which leave room for improvement, by assisting doctors in analysing the lesions seen in patients' images.
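To make the Grad-CAM step concrete, the sketch below shows the core heatmap computation on synthetic feature maps and gradients (NumPy only; the function name, array shapes, and data are illustrative assumptions, not the paper's actual models or code): each convolutional feature map is weighted by the spatial mean of its gradient, the maps are summed, and a ReLU keeps only the regions that support the predicted class.

```python
import numpy as np

def grad_cam(activations, gradients):
    """Minimal Grad-CAM combination step (illustrative sketch).

    activations: feature maps from the last conv layer, shape (C, H, W)
    gradients:   gradients of the class score w.r.t. those maps, (C, H, W)
    Returns a (H, W) heatmap normalised to [0, 1].
    """
    # Channel weights: global average of each gradient map
    weights = gradients.mean(axis=(1, 2))              # shape (C,)
    # Weighted sum of feature maps across channels
    cam = np.tensordot(weights, activations, axes=1)   # shape (H, W)
    # ReLU: keep only features with positive influence on the class
    cam = np.maximum(cam, 0.0)
    if cam.max() > 0:
        cam /= cam.max()                               # normalise to [0, 1]
    return cam

# Synthetic example: 4 feature maps of size 7x7 (random stand-ins)
rng = np.random.default_rng(0)
acts = rng.random((4, 7, 7))
grads = rng.random((4, 7, 7))
heatmap = grad_cam(acts, grads)
print(heatmap.shape)  # (7, 7)
```

In practice the heatmap would be upsampled to the endoscopy image's resolution and overlaid on it, which is how one can check whether the model attends to physiologically relevant regions.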