
Highlighted Diffusion Model as Plug-In Priors for Polyp Segmentation.

Authors

Du Yuhao, Jiang Yuncheng, Tan Shuangyi, Liu Si-Qi, Li Zhen, Li Guanbin, Wan Xiang

Publication

IEEE J Biomed Health Inform. 2025 Feb;29(2):1209-1220. doi: 10.1109/JBHI.2024.3485767. Epub 2025 Feb 10.

Abstract

Automated polyp segmentation from colonoscopy images is crucial for colorectal cancer diagnosis. The accuracy of such segmentation, however, is challenged by two main factors. First, the variability in polyps' size, shape, and color, coupled with the scarcity of well-annotated data due to the need for specialized manual annotation, hampers the efficacy of existing deep learning methods. Second, concealed polyps often blend with adjacent intestinal tissues, leading to poor contrast that challenges segmentation models. Recently, diffusion models have been explored and adapted for polyp segmentation tasks. However, the significant domain gap between RGB-colonoscopy images and grayscale segmentation masks, along with the low efficiency of the diffusion generation process, hinders the practical implementation of these models. To mitigate these challenges, we introduce the Highlighted Diffusion Model Plus (HDM+), a two-stage polyp segmentation framework. This framework incorporates the Highlighted Diffusion Model (HDM) to provide explicit semantic guidance, thereby enhancing segmentation accuracy. In the initial stage, the HDM is trained using highlighted ground-truth data, which emphasizes polyp regions while suppressing the background in the images. This approach reduces the domain gap by focusing on the image itself rather than on the segmentation mask. In the subsequent second stage, we employ the highlighted features from the trained HDM's U-Net model as plug-in priors for polyp segmentation, rather than generating highlighted images, thereby increasing efficiency. Extensive experiments conducted on six polyp segmentation benchmarks demonstrate the effectiveness of our approach.
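The abstract's first stage hinges on "highlighted ground-truth data, which emphasizes polyp regions while suppressing the background." The paper does not specify the exact transform, but a minimal sketch of one plausible highlighting operation (the `bg_weight` attenuation factor and function name are assumptions, not taken from the paper) might look like:

```python
import numpy as np

def highlight_polyp(image: np.ndarray, mask: np.ndarray,
                    bg_weight: float = 0.2) -> np.ndarray:
    """Emphasize the polyp region and suppress the background.

    image: HxWx3 float array in [0, 1] (RGB colonoscopy frame)
    mask:  HxW binary array (1 = polyp, 0 = background)
    bg_weight: hypothetical attenuation factor for background pixels
    """
    m = mask[..., None].astype(image.dtype)        # broadcast mask over channels
    # Polyp pixels keep full intensity; background pixels are dimmed,
    # keeping the target in the RGB image domain rather than the mask domain.
    return image * m + bg_weight * image * (1.0 - m)

# Toy example: a 4x4 white image with a 2x2 "polyp" in one corner
img = np.ones((4, 4, 3), dtype=np.float32)
msk = np.zeros((4, 4), dtype=np.float32)
msk[:2, :2] = 1.0
out = highlight_polyp(img, msk)
```

Training the diffusion model to generate such highlighted images, rather than grayscale masks, is what the authors credit with reducing the RGB-to-mask domain gap; the trained U-Net's features then serve as segmentation priors in the second stage.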

