Hu Keli, Wang Chen, Zhu Hancan, Zhao Liping, Fu Chao, Yang Weijun, Pan Wensheng
Department of Gastroenterology, Cancer Center, Zhejiang Provincial People's Hospital (Affiliated People's Hospital, Hangzhou Medical College), Hangzhou, 310014, People's Republic of China.
Department of Computer Science and Engineering, Shaoxing University, Shaoxing, 312000, People's Republic of China.
Sci Rep. 2025 Sep 25;15(1):32777. doi: 10.1038/s41598-025-15470-2.
Colonoscopy is the gold standard for the examination and detection of polyps, over 90% of which can potentially progress to colorectal cancer. Accurate polyp segmentation therefore plays a pivotal role in the early diagnosis and treatment of colorectal cancer. Most existing methods ignore the inconsistent colors of colonoscopy images and fuse features from different layers by direct element-wise addition or concatenation; the former can lead to overfitting, while the latter weakens the complementarity between levels. To address these challenges, we propose a deep adjacent-differential network with shallow attention for polyp segmentation (ADSANet). First, we develop a color exchange strategy based on uncorrelated and specular region suppression to decouple image content from color. This strategy lets the model prioritize the appearance of the target, reducing the risk of overfitting to color features. To maximize the synergy between layers, we propose an adjacent-differential feature fusion module (ADFM) and then employ a shallow attention module (SAM) for further feature fusion. Specifically, the ADFM generates differential features between adjacent layers and combines them with the corresponding-level encoder features and the adjacent decoder features. We apply the ADFM sequentially across scale levels for feature decoding, and the final prediction is computed by fusing the outputs of the sequentially connected ADFMs and the shallow attention module. Extensive experiments on five datasets show that ADSANet outperforms most state-of-the-art convolutional neural network (CNN)-based methods: it yields significant gains of 18.5%, 3.5%, 3.8%, 4.0%, and 1.7% over the classical PraNet on ETIS, ClinicDB, Endoscene, ColonDB, and Kvasir-SEG, respectively, demonstrating the effectiveness of the proposed color exchange and adjacent-differential feature fusion scheme for more accurate polyp segmentation.
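The color exchange idea can be illustrated with a minimal sketch: re-render the content of one colonoscopy image using the per-channel color statistics of another, so the network cannot key on color. This is a hypothetical Reinhard-style mean/std transfer, not the authors' implementation; in particular, the paper's suppression of uncorrelated and specular regions before computing the statistics is omitted here, and the function name is illustrative.

```python
import numpy as np

def color_exchange(content: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Transfer per-channel color statistics from `reference` onto `content`.

    Both inputs are float32 RGB arrays in [0, 1] with shape (H, W, 3).
    Sketch only: the paper additionally suppresses specular highlights and
    uncorrelated regions before estimating the statistics.
    """
    out = content.copy()
    for c in range(3):
        src = content[..., c]
        ref = reference[..., c]
        # Whiten the content channel, then re-scale it with the
        # reference channel's mean and standard deviation.
        out[..., c] = (src - src.mean()) / (src.std() + 1e-6)
        out[..., c] = out[..., c] * ref.std() + ref.mean()
    return np.clip(out, 0.0, 1.0)
```

Applied as a training-time augmentation, the segmentation target stays fixed while only the rendered colors change, which is what lets the model focus on appearance rather than color.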
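The ADFM description also admits a small sketch: a differential feature is taken between adjacent encoder scales and fused with the same-level encoder feature and the adjacent decoder feature. The block below is an assumption-laden reconstruction from the abstract alone (all feature maps are assumed to be projected to a common channel count beforehand), not the authors' exact module.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ADFMSketch(nn.Module):
    """Hypothetical adjacent-differential fusion block (sketch, not the paper's code)."""

    def __init__(self, channels: int):
        super().__init__()
        # Fuse the three concatenated feature maps back to `channels`.
        self.fuse = nn.Sequential(
            nn.Conv2d(3 * channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, enc_i, enc_next, dec_prev):
        # Bring the adjacent (deeper) encoder and decoder features
        # up to the current scale.
        enc_next_up = F.interpolate(enc_next, size=enc_i.shape[-2:],
                                    mode="bilinear", align_corners=False)
        dec_prev_up = F.interpolate(dec_prev, size=enc_i.shape[-2:],
                                    mode="bilinear", align_corners=False)
        # Differential feature between adjacent layers: details present
        # at the finer scale but absent from the deeper one.
        diff = enc_i - enc_next_up
        # Combine the differential feature with the same-level encoder
        # feature and the adjacent decoder feature.
        return self.fuse(torch.cat([diff, enc_i, dec_prev_up], dim=1))

# Example with toy feature maps, all projected to 64 channels:
adfm = ADFMSketch(64)
enc_i    = torch.randn(1, 64, 44, 44)  # encoder feature at the current scale
enc_next = torch.randn(1, 64, 22, 22)  # adjacent, deeper encoder feature
dec_prev = torch.randn(1, 64, 22, 22)  # adjacent decoder feature
out = adfm(enc_i, enc_next, dec_prev)  # -> shape (1, 64, 44, 44)
```

Chaining such blocks from the deepest to the shallowest scale gives the sequential decoding the abstract describes, with the final prediction fused from the chained outputs and the shallow attention module.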