Yun Haijiao, Du Qingyu, Han Ziqing, Li Mingjing, Yang Le, Liu Xinyang, Wang Chao, Ma Weitian
School of Electronic Information Engineering, Changchun University, Changchun 130022, China.
Graduate School, Changchun University, Changchun 130022, China.
Sensors (Basel). 2025 Jul 27;25(15):4652. doi: 10.3390/s25154652.
Segmentation of skin lesions in dermoscopic images is critical for the accurate diagnosis of skin cancers, particularly malignant melanoma, yet it is hindered by irregular lesion shapes, blurred boundaries, low contrast, and artifacts such as hair. Conventional deep learning methods, typically based on UNet or Transformer architectures, often fail to fully exploit lesion features and incur high computational costs, compromising precise lesion delineation. To overcome these challenges, we propose SGNet, a structure-guided network integrating a hybrid CNN-Mamba framework for robust skin lesion segmentation. SGNet employs the Visual Mamba (VMamba) encoder to efficiently extract multi-scale features, followed by the Dual-Domain Boundary Enhancer (DDBE), which refines boundary representations and suppresses noise through spatial- and frequency-domain processing. The Semantic-Texture Fusion Unit (STFU) adaptively integrates low-level texture with high-level semantic features, while the Structure-Aware Guidance Module (SAGM) generates coarse segmentation maps to provide global structural guidance. The Guided Multi-Scale Refiner (GMSR) further optimizes boundary details through a multi-scale semantic attention mechanism. Comprehensive experiments on the ISIC2017, ISIC2018, and PH2 datasets demonstrate SGNet's superior performance, with average improvements of 3.30% in mean Intersection over Union (mIoU) and 1.77% in Dice Similarity Coefficient (DSC) over state-of-the-art methods. Ablation studies confirm the effectiveness of each component, highlighting SGNet's exceptional accuracy and robust generalization for computer-aided dermatological diagnosis.
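The data flow described in the abstract (multi-scale encoding, dual-domain boundary enhancement, semantic-texture fusion, coarse structural guidance) can be sketched as follows. This is a minimal NumPy illustration of the pipeline's shape only: every function body is a simplified stand-in invented for this sketch, not the authors' implementation, which uses a learned VMamba encoder and attention-based modules.

```python
import numpy as np

def encoder(image, scales=(1, 2, 4, 8)):
    """Stand-in for the VMamba encoder: multi-scale maps via average pooling."""
    feats = []
    for s in scales:
        h, w = image.shape[0] // s, image.shape[1] // s
        feats.append(image[:h * s, :w * s].reshape(h, s, w, s).mean(axis=(1, 3)))
    return feats

def ddbe(feat):
    """Stand-in for the Dual-Domain Boundary Enhancer: a frequency-domain
    high-pass pass (DC component removed) added back to the spatial map."""
    spectrum = np.fft.fft2(feat)
    spectrum[0, 0] = 0.0              # suppress the smooth (DC) component
    return feat + np.real(np.fft.ifft2(spectrum))

def stfu(low, high):
    """Stand-in for the Semantic-Texture Fusion Unit: upsample the coarser
    semantic map and blend it with the finer texture map."""
    up = np.kron(high, np.ones((low.shape[0] // high.shape[0],
                                low.shape[1] // high.shape[1])))
    return 0.5 * low + 0.5 * up[:low.shape[0], :low.shape[1]]

def segment(image):
    feats = [ddbe(f) for f in encoder(image)]
    fused = feats[-1]
    for f in reversed(feats[:-1]):    # coarse-to-fine fusion across scales
        fused = stfu(f, fused)
    coarse = fused > fused.mean()     # SAGM-style coarse structural map
    return coarse.astype(float)       # GMSR refinement omitted in this sketch

mask = segment(np.random.default_rng(0).random((64, 64)))
print(mask.shape)  # (64, 64)
```

The sketch only shows how the stages compose (encode, enhance boundaries, fuse semantics with texture, threshold into a coarse map); the reported mIoU/DSC gains come from the learned versions of these modules, not from these stand-ins.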