SGNet: A Structure-Guided Network with Dual-Domain Boundary Enhancement and Semantic Fusion for Skin Lesion Segmentation.

Authors

Yun Haijiao, Du Qingyu, Han Ziqing, Li Mingjing, Yang Le, Liu Xinyang, Wang Chao, Ma Weitian

Affiliations

School of Electronic Information Engineering, Changchun University, Changchun 130022, China.

Graduate School, Changchun University, Changchun 130022, China.

Publication Information

Sensors (Basel). 2025 Jul 27;25(15):4652. doi: 10.3390/s25154652.

Abstract

Segmentation of skin lesions in dermoscopic images is critical for the accurate diagnosis of skin cancers, particularly malignant melanoma, yet it is hindered by irregular lesion shapes, blurred boundaries, low contrast, and artifacts such as hair interference. Conventional deep learning methods, typically based on UNet or Transformer architectures, often fail to fully exploit lesion features and incur high computational costs, compromising precise lesion delineation. To overcome these challenges, we propose SGNet, a structure-guided network integrating a hybrid CNN-Mamba framework for robust skin lesion segmentation. SGNet employs the Visual Mamba (VMamba) encoder to efficiently extract multi-scale features, followed by the Dual-Domain Boundary Enhancer (DDBE), which refines boundary representations and suppresses noise through spatial- and frequency-domain processing. The Semantic-Texture Fusion Unit (STFU) adaptively integrates low-level texture with high-level semantic features, while the Structure-Aware Guidance Module (SAGM) generates coarse segmentation maps to provide global structural guidance. The Guided Multi-Scale Refiner (GMSR) further optimizes boundary details through a multi-scale semantic attention mechanism. Comprehensive experiments on the ISIC2017, ISIC2018, and PH2 datasets demonstrate SGNet's superior performance, with average improvements of 3.30% in mean Intersection over Union (mIoU) and 1.77% in Dice Similarity Coefficient (DSC) over state-of-the-art methods. Ablation studies confirm the effectiveness of each component, highlighting SGNet's accuracy and robust generalization for computer-aided dermatological diagnosis.
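
As a concrete illustration of the dual-domain idea behind the DDBE, the PyTorch sketch below refines a feature map in parallel spatial and frequency branches and then fuses the two views. The page carries no code, so the module name, the depthwise convolution stack, the learnable frequency gate, and the residual fusion are assumptions made for illustration, not the authors' implementation.

```python
# Hedged sketch only: `DualDomainBoundaryBlock`, its layer choices, and the fusion
# strategy are assumptions illustrating spatial + frequency-domain refinement as
# described in the abstract; they are not SGNet's published DDBE.
import torch
import torch.nn as nn


class DualDomainBoundaryBlock(nn.Module):
    """Refine a feature map in both the spatial and frequency domains."""

    def __init__(self, channels: int):
        super().__init__()
        # Spatial branch: depthwise + pointwise convolutions emphasising local boundary cues.
        self.spatial = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, groups=channels),
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.BatchNorm2d(channels),
            nn.GELU(),
        )
        # Frequency branch: a per-channel learnable gate on the FFT coefficients,
        # meant to damp noise-like components (e.g. hair artifacts).
        self.freq_gate = nn.Parameter(torch.ones(1, channels, 1, 1))
        # 1x1 projection back to the input width after concatenating both views.
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        spatial = self.spatial(x)

        # Real-to-complex FFT over H and W, apply the gate, then invert.
        freq = torch.fft.rfft2(x, norm="ortho")
        freq = freq * self.freq_gate
        freq = torch.fft.irfft2(freq, s=x.shape[-2:], norm="ortho")

        # Concatenate the spatially and spectrally refined views, fuse, keep a residual path.
        return self.fuse(torch.cat([spatial, freq], dim=1)) + x


if __name__ == "__main__":
    block = DualDomainBoundaryBlock(channels=64)
    feats = torch.randn(2, 64, 56, 56)   # e.g. one encoder stage's output
    print(block(feats).shape)            # torch.Size([2, 64, 56, 56])
```

A block like this could sit after each encoder stage: the spatial branch sharpens local boundary detail while the frequency gate can attenuate noise-like coefficient bands, which is the division of labour the abstract attributes to the DDBE.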

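In the same hedged spirit, the sketch below shows one way a coarse segmentation map can guide boundary refinement, which is the role the abstract assigns to the SAGM (coarse, structure-aware map) and the GMSR (guided refinement). `GuidedRefiner`, its attention gate, and the channel sizes are illustrative assumptions, not the published modules.

```python
# Hedged sketch: `GuidedRefiner` is an assumed, simplified stand-in showing how a
# coarse single-channel prediction can re-weight decoder features before the final
# lesion logits; it is not the paper's SAGM/GMSR implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GuidedRefiner(nn.Module):
    """Refine decoder features under guidance from a coarse segmentation map."""

    def __init__(self, channels: int):
        super().__init__()
        # Attention gate computed from the features plus the (resized) coarse map.
        self.attn = nn.Sequential(
            nn.Conv2d(channels + 1, channels, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )
        self.head = nn.Conv2d(channels, 1, kernel_size=1)  # final lesion logits

    def forward(self, feats: torch.Tensor, coarse_logits: torch.Tensor) -> torch.Tensor:
        # Bring the coarse prediction to the feature resolution.
        coarse = F.interpolate(coarse_logits, size=feats.shape[-2:],
                               mode="bilinear", align_corners=False)
        # Structure-guided spatial gate, applied residually to the features.
        gate = self.attn(torch.cat([feats, torch.sigmoid(coarse)], dim=1))
        refined = feats * gate + feats
        return self.head(refined)


if __name__ == "__main__":
    refiner = GuidedRefiner(channels=64)
    feats = torch.randn(2, 64, 112, 112)      # decoder features
    coarse = torch.randn(2, 1, 28, 28)        # coarse map from a deeper stage
    print(refiner(feats, coarse).shape)       # torch.Size([2, 1, 112, 112])
```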

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8c40/12349126/8d1dfd7e2c77/sensors-25-04652-g001.jpg

Similar Articles

1. Lesion boundary detection for skin lesion segmentation based on boundary sensing and CNN-transformer fusion networks. Artif Intell Med. 2025 Sep;167:103190. doi: 10.1016/j.artmed.2025.103190. Epub 2025 Jun 4.
2. BDFormer: Boundary-aware dual-decoder transformer for skin lesion segmentation. Artif Intell Med. 2025 Apr;162:103079. doi: 10.1016/j.artmed.2025.103079. Epub 2025 Feb 15.
3. ETU-Net: edge enhancement-guided U-Net with transformer for skin lesion segmentation. Phys Med Biol. 2023 Dec 22;69(1). doi: 10.1088/1361-6560/ad13d2.
4. Automatic melanoma detection using an optimized five-stream convolutional neural network. Sci Rep. 2025 Jul 1;15(1):22404. doi: 10.1038/s41598-025-05675-w.
5. GatedSegDiff: a gated fusion diffusion model for skin lesion segmentation. Med Biol Eng Comput. 2025 Sep;63(9):2637-2650. doi: 10.1007/s11517-025-03337-7. Epub 2025 Mar 18.
6. VMKLA-UNet: vision Mamba with KAN linear attention U-Net. Sci Rep. 2025 Apr 17;15(1):13258. doi: 10.1038/s41598-025-97397-2.

