

Adversarial class-wise self-knowledge distillation for medical image segmentation.

Authors

Yu Xiangchun, Shen Jiaqing, Zhang Dingwen, Zheng Jian

Affiliations

Jiangxi Provincial Key Laboratory of Multidimensional Intelligent Perception and Control, School of Information Engineering, Jiangxi University of Science and Technology, Ganzhou, China.

Publication

Sci Rep. 2025 Apr 17;15(1):13231. doi: 10.1038/s41598-025-98116-7.

DOI: 10.1038/s41598-025-98116-7
PMID: 40246971
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC12006537/
Abstract

Self-knowledge distillation enables knowledge transfer by dynamically constructing the next-stage learning objectives, thus providing more effective path cues to optimize the compact student. The challenge lies in formulating effective learning objectives for the upcoming stage that mitigate the interference of inter-class similarity in medical image segmentation. This paper presents an Adversarial Class-Wise Self-Knowledge Distillation (ACW-SKD). ACW-SKD leverages an auxiliary head to generate coarse segmentation results, which are then utilized as prediction masks to refine class-wise features, followed by mitigating inter-class similarity via class-wise feature distillation. A feature reconstruction module (FRM) is introduced in the penultimate feature layer and class-wise feature layer to avoid plugging in multiple intermediate branches for constructing the next-stage learning objectives. Furthermore, adversarial temperature loss is incorporated to further recognize inter-class similarity by integrating a learnable temperature module. Extensive experiments are conducted on three benchmark datasets: Synapse, FLARE2022, and M2caiSeg. The results indicate that ACW-SKD surpasses several offline knowledge distillation methods, self-knowledge distillation methods, and U-Net networks. Ablation studies and visual analysis further validate the efficacy of ACW-SKD. This method notably enhances segmentation accuracy for challenging classes and mitigates the influence of inter-class similarities in medical image segmentation. Moreover, ACW-SKD delivers comparable results to U-Net with a reduced computational demand, positioning it as a viable option for deploying efficient medical image segmentation models on mobile devices. Our codes are available at https://github.com/shenjq77/ACW-SKD .
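To make the core idea concrete, the sketch below shows a class-wise, temperature-scaled distillation loss: pixels are grouped by the hard predictions of an auxiliary head, and KL divergence between teacher and student distributions is averaged within each class region before averaging across classes. This is a minimal illustration of the general technique, not the authors' ACW-SKD implementation; all function names, shapes, and the fixed temperature are assumptions (the paper's temperature module is learnable, and its distillation operates on features, which this sketch does not reproduce).

```python
import math

def softmax(logits, T=1.0):
    # Temperature-scaled, numerically stable softmax over one logit vector.
    m = max(l / T for l in logits)
    exps = [math.exp(l / T - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def class_wise_kd_loss(student_logits, teacher_logits, pred_mask, num_classes, T=2.0):
    """Temperature-scaled KL(teacher || student), averaged separately over
    the pixels an auxiliary head assigned to each class, then over classes.

    student_logits / teacher_logits: list of per-pixel logit vectors.
    pred_mask: list of hard class predictions (one per pixel).
    """
    per_class = {c: [] for c in range(num_classes)}
    for s, t, c in zip(student_logits, teacher_logits, pred_mask):
        p_t = softmax(t, T)
        p_s = softmax(s, T)
        kl = sum(pt * (math.log(pt + 1e-12) - math.log(ps + 1e-12))
                 for pt, ps in zip(p_t, p_s))
        per_class[c].append(kl)
    # T^2 scaling as in standard distillation; classes with no pixels are skipped.
    class_means = [T * T * sum(v) / len(v) for v in per_class.values() if v]
    return sum(class_means) / len(class_means)
```

Averaging within each predicted region first gives small or hard classes the same weight as large ones, which is one plausible way a class-wise loss can counteract inter-class similarity dominated by majority classes.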


Figures (PMC):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/802a/12006537/613910f76d72/41598_2025_98116_Fig1_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/802a/12006537/9f286cc09e0c/41598_2025_98116_Fig2_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/802a/12006537/ee3369f26241/41598_2025_98116_Fig3_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/802a/12006537/b11e7e695355/41598_2025_98116_Fig4_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/802a/12006537/025805861f80/41598_2025_98116_Fig5_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/802a/12006537/18ca59213cd3/41598_2025_98116_Fig6_HTML.jpg

Similar articles

1. Adversarial class-wise self-knowledge distillation for medical image segmentation.
Sci Rep. 2025 Apr 17;15(1):13231. doi: 10.1038/s41598-025-98116-7.
2. Learnable Prompting SAM-Induced Knowledge Distillation for Semi-Supervised Medical Image Segmentation.
IEEE Trans Med Imaging. 2025 May;44(5):2295-2306. doi: 10.1109/TMI.2025.3530097. Epub 2025 May 2.
3. Feature distance-weighted adaptive decoupled knowledge distillation for medical image segmentation.
Int J Comput Assist Radiol Surg. 2025 Apr 22. doi: 10.1007/s11548-025-03346-9.
4. Efficient skin lesion segmentation with boundary distillation.
Med Biol Eng Comput. 2024 Sep;62(9):2703-2716. doi: 10.1007/s11517-024-03095-y. Epub 2024 May 1.
5. ScribSD+: Scribble-supervised medical image segmentation based on simultaneous multi-scale knowledge distillation and class-wise contrastive regularization.
Comput Med Imaging Graph. 2024 Sep;116:102416. doi: 10.1016/j.compmedimag.2024.102416. Epub 2024 Jul 9.
6. Voxel-wise adversarial semi-supervised learning for medical image segmentation.
Comput Biol Med. 2022 Nov;150:106152. doi: 10.1016/j.compbiomed.2022.106152. Epub 2022 Sep 29.
7. SurgiNet: Pyramid Attention Aggregation and Class-wise Self-Distillation for Surgical Instrument Segmentation.
Med Image Anal. 2022 Feb;76:102310. doi: 10.1016/j.media.2021.102310. Epub 2021 Dec 4.
8. MSKD: Structured knowledge distillation for efficient medical image segmentation.
Comput Biol Med. 2023 Sep;164:107284. doi: 10.1016/j.compbiomed.2023.107284. Epub 2023 Aug 2.
9. Graph Flow: Cross-Layer Graph Flow Distillation for Dual Efficient Medical Image Segmentation.
IEEE Trans Med Imaging. 2023 Apr;42(4):1159-1171. doi: 10.1109/TMI.2022.3224459. Epub 2023 Apr 3.
10. Continual Nuclei Segmentation via Prototype-Wise Relation Distillation and Contrastive Learning.
IEEE Trans Med Imaging. 2023 Dec;42(12):3794-3804. doi: 10.1109/TMI.2023.3307892. Epub 2023 Nov 30.

References cited in this article

1. Unleashing the strengths of unlabelled data in deep learning-assisted pan-cancer abdominal organ quantification: the FLARE22 challenge.
Lancet Digit Health. 2024 Nov;6(11):e815-e826. doi: 10.1016/S2589-7500(24)00154-7.
2. Resolution-Aware Knowledge Distillation for Efficient Inference.
IEEE Trans Image Process. 2021;30:6985-6996. doi: 10.1109/TIP.2021.3101158. Epub 2021 Aug 6.
3. SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation.
IEEE Trans Pattern Anal Mach Intell. 2017 Dec;39(12):2481-2495. doi: 10.1109/TPAMI.2016.2644615. Epub 2017 Jan 2.
4. Fully Convolutional Networks for Semantic Segmentation.
IEEE Trans Pattern Anal Mach Intell. 2017 Apr;39(4):640-651. doi: 10.1109/TPAMI.2016.2572683. Epub 2016 May 24.