Gelu-Simeon Moana, Mamou Adel, Saint-Georges Georgette, Alexis Marceline, Sautereau Marie, Mamou Yassine, Simeon Jimmy
Service d'Hépato-Gastroentérologie, CHU de la Guadeloupe, Pointe-à-Pitre, F-97100, France.
Univ Antilles, Univ. Rennes, INSERM, EHESP, IRSET (Institut de Recherche en Santé, Environnement et Travail) - UMR_S 1085, Pointe-à-Pitre, F-97100, France.
BMC Med Inform Decis Mak. 2025 Jun 4;25(1):206. doi: 10.1186/s12911-025-03047-y.
BACKGROUND: Deep learning models have shown considerable potential to improve diagnostic accuracy across medical fields. Although YOLACT has demonstrated real-time detection and segmentation on non-medical datasets, its application in medical settings remains underexplored. This study evaluated the performance of a YOLACT-derived Real-time Polyp Delineation Model (RTPoDeMo) for real-time use on prospectively recorded colonoscopy videos. METHODS: Twelve combinations of architectures, including Mask-RCNN, YOLACT, and YOLACT++, paired with backbones such as ResNet50, ResNet101, and DarkNet53, were tested on 2,188 colonoscopy images at three image resolutions. Dataset preparation involved pre-processing and segmentation annotation, with optimized image augmentation. RESULTS: RTPoDeMo, using YOLACT-ResNet50, achieved 72.3 mAP and 32.8 FPS for real-time instance segmentation based on COCO annotations. The model performed with a per-image accuracy of 99.59% (95% CI: [99.45 - 99.71%]), sensitivity of 90.63% (95% CI: [78.95 - 93.64%]), specificity of 99.95% (95% CI: [99.93 - 99.97%]) and an F1-score of 0.94 (95% CI: [0.87 - 0.98]). In validation, out of 36 polyps detected by experts, RTPoDeMo missed only one polyp, compared with six missed by senior endoscopists. The model demonstrated good agreement with the experts, reflected in a Cohen's kappa coefficient of 0.72 (95% CI: [0.54 - 1.00], p < 0.0001). CONCLUSIONS: Our model provides new perspectives on adapting YOLACT to the real-time delineation of colorectal polyps. In the future, it could improve the characterization of polyps to be resected during colonoscopy.
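The per-image metrics reported in the RESULTS section (accuracy, sensitivity, specificity, F1-score) all derive from a standard binary confusion matrix. As a minimal sketch of how such metrics are computed, assuming purely illustrative counts (the study's actual confusion matrix is not given in the abstract):

```python
def binary_classification_metrics(tp, fp, fn, tn):
    """Compute per-image detection metrics from confusion-matrix counts.

    tp/fp/fn/tn: true/false positives and negatives (counts).
    """
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total
    sensitivity = tp / (tp + fn)      # recall: detected polyps / all polyps
    specificity = tn / (tn + fp)      # clean frames correctly left unmarked
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {
        "accuracy": accuracy,
        "sensitivity": sensitivity,
        "specificity": specificity,
        "precision": precision,
        "f1": f1,
    }


# Illustrative counts only, NOT the study's data.
m = binary_classification_metrics(tp=9, fp=1, fn=1, tn=89)
print({k: round(v, 4) for k, v in m.items()})
```

Note that with the heavy class imbalance typical of colonoscopy frames (most frames contain no polyp), specificity and accuracy can be very high even when sensitivity is the clinically limiting figure, which is why the abstract reports all four.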