Biffi Carlo, Roffo Giorgio, Salvagnini Pietro, Cherubini Andrea
Cosmo Intelligent Medical Devices, Dublin, Ireland.
Comput Methods Programs Biomed. 2025 Oct;270:108782. doi: 10.1016/j.cmpb.2025.108782. Epub 2025 Jul 3.
Following recent advancements in computer-aided detection and diagnosis systems for colonoscopy, the automated reporting of colonoscopy procedures is set to further revolutionize clinical practice. A crucial yet underexplored aspect in the development of these systems is the creation of computer vision models capable of autonomously segmenting full-procedure colonoscopy videos into anatomical sections and procedural phases. In this work, we aim to create the first open-access dataset for this task and propose a state-of-the-art approach, benchmarked against competitive models.
We annotated the publicly available REAL-Colon dataset, consisting of 2.7 million frames from 60 complete colonoscopy videos, with frame-level labels for anatomical locations and colonoscopy phases across nine categories. We then present ColonTCN, a learning-based architecture that employs custom temporal convolutional blocks designed to efficiently capture long temporal dependencies for the temporal segmentation of colonoscopy videos. We also propose a dual k-fold cross-validation evaluation protocol for this benchmark, which includes model assessment on unseen, multi-center data.
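The paper does not include implementation details in the abstract, but the core idea behind temporal convolutional blocks can be illustrated. The sketch below (all function names and parameter choices are our own assumptions, not the authors' ColonTCN code) shows why stacked dilated causal convolutions capture long temporal dependencies with few parameters: with doubling dilation rates, the receptive field grows exponentially with depth, so a shallow stack can span thousands of video frames.

```python
import numpy as np

def dilated_causal_conv1d(x, w, dilation):
    """Causal 1-D convolution: output at frame t depends only on
    frames t, t-dilation, t-2*dilation, ... (left zero-padding)."""
    k = len(w)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), np.asarray(x, dtype=float)])
    return np.array([
        sum(w[j] * xp[t + pad - j * dilation] for j in range(k))
        for t in range(len(x))
    ])

def receptive_field(kernel_size, dilations):
    """Number of past frames visible to a stack of dilated conv layers."""
    return 1 + sum((kernel_size - 1) * d for d in dilations)

# With kernel size 3 and dilations doubling each layer (1, 2, 4, ..., 512),
# ten layers already see over 2000 frames of temporal context.
dilations = [2 ** i for i in range(10)]
print(receptive_field(3, dilations))  # 2047
```

This exponential growth is the standard argument for TCN-style architectures over plain (undilated) convolutions, whose receptive field grows only linearly with depth; it is consistent with, but not taken from, the ColonTCN design described above.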
ColonTCN achieves state-of-the-art performance in classification accuracy while maintaining a low parameter count when evaluated using the two proposed k-fold cross-validation settings, outperforming competitive models. We report ablation studies to provide insights into the challenges of this task and highlight the benefits of the custom temporal convolutional blocks, which enhance learning and improve model efficiency.
We believe that the proposed open-access benchmark and the ColonTCN approach represent a significant advancement in the temporal segmentation of colonoscopy procedures, fostering further open-access research to address this clinical need. Code and data are available at: https://github.com/cosmoimd/temporal_segmentation.