

MSKD: Structured knowledge distillation for efficient medical image segmentation.

Affiliation

College of Information Science and Engineering, Northeastern University, Shenyang, Liaoning, China.

Publication

Comput Biol Med. 2023 Sep;164:107284. doi: 10.1016/j.compbiomed.2023.107284. Epub 2023 Aug 2.

DOI: 10.1016/j.compbiomed.2023.107284
PMID: 37572439
Abstract

In recent years, deep learning has revolutionized the field of medical image segmentation by enabling the development of powerful deep neural networks. However, these models tend to be complex and computationally demanding, posing challenges for practical implementation in clinical settings. To address this issue, we propose an efficient structured knowledge distillation framework that leverages a powerful teacher network to assist in training a lightweight student network. Specifically, we propose the Feature Filtering Distillation method, which focuses on transferring region-level semantic information while minimizing redundant information transmission from the teacher to the student network. This approach effectively mitigates the problem of inaccurate segmentation caused by similar internal organ characteristics. Additionally, we propose the Region Graph Distillation method, which exploits the higher-order representational capabilities of graphs to enable the student network to better imitate structured semantic information from the teacher. To validate the effectiveness of our proposed methods, we conducted experiments on the Synapse multi-organ segmentation and KiTS kidney tumor segmentation datasets using various network models. The results demonstrate that our method significantly improves the segmentation performance of lightweight neural networks, with improvements of up to 18.56% in Dice coefficient. Importantly, our approach achieves these improvements without introducing additional model parameters. Overall, our proposed knowledge distillation methods offer a promising solution for efficient medical image segmentation, empowering medical experts to make more accurate diagnoses and improve patient treatment.
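The abstract describes two losses at a high level: a feature-filtering term that transfers region-level semantics from teacher to student, and a region-graph term that matches structured relations between regions. The sketch below is a minimal NumPy illustration of these two general ideas, not the authors' exact formulation — the function names, the temperature value, and the choice of cosine similarity for graph edges are assumptions for illustration only.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def pixelwise_kd_loss(student_logits, teacher_logits, T=2.0):
    """Soft-target distillation: mean per-pixel KL(teacher || student)
    over class probabilities softened by temperature T."""
    p_t = softmax(teacher_logits / T)
    log_ratio = np.log(p_t + 1e-12) - np.log(softmax(student_logits / T) + 1e-12)
    return float((p_t * log_ratio).sum(axis=-1).mean() * T * T)

def region_graph_loss(student_feats, teacher_feats, region_masks):
    """Region-graph matching: nodes are mask-averaged feature vectors
    (one per region), edges are cosine similarities between regions.
    The student is penalized for deviating from the teacher's graph."""
    def affinity(feats):
        nodes = np.stack([feats[m].mean(axis=0) for m in region_masks])
        nodes /= np.linalg.norm(nodes, axis=1, keepdims=True) + 1e-12
        return nodes @ nodes.T
    return float(np.abs(affinity(student_feats) - affinity(teacher_feats)).mean())
```

In a training loop these terms would be added to the ordinary segmentation loss; the paper's contribution lies in *which* information is filtered and how the region graph is built, which this sketch does not reproduce.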


Similar Articles

1
MSKD: Structured knowledge distillation for efficient medical image segmentation.
Comput Biol Med. 2023 Sep;164:107284. doi: 10.1016/j.compbiomed.2023.107284. Epub 2023 Aug 2.
2
Efficient skin lesion segmentation with boundary distillation.
Med Biol Eng Comput. 2024 Sep;62(9):2703-2716. doi: 10.1007/s11517-024-03095-y. Epub 2024 May 1.
3
Exploring Generalizable Distillation for Efficient Medical Image Segmentation.
IEEE J Biomed Health Inform. 2024 Jul;28(7):4170-4183. doi: 10.1109/JBHI.2024.3385098.
4
Graph Flow: Cross-Layer Graph Flow Distillation for Dual Efficient Medical Image Segmentation.
IEEE Trans Med Imaging. 2023 Apr;42(4):1159-1171. doi: 10.1109/TMI.2022.3224459. Epub 2023 Apr 3.
5
Efficient Medical Image Segmentation Based on Knowledge Distillation.
IEEE Trans Med Imaging. 2021 Dec;40(12):3820-3831. doi: 10.1109/TMI.2021.3098703. Epub 2021 Nov 30.
6
ABUS tumor segmentation via decouple contrastive knowledge distillation.
Phys Med Biol. 2023 Dec 26;69(1). doi: 10.1088/1361-6560/ad1274.
7
G-MBRMD: Lightweight liver segmentation model based on guided teaching with multi-head boundary reconstruction mapping distillation.
Comput Biol Med. 2024 Aug;178:108733. doi: 10.1016/j.compbiomed.2024.108733. Epub 2024 Jun 18.
8
Efficient Multi-Organ Segmentation From 3D Abdominal CT Images With Lightweight Network and Knowledge Distillation.
IEEE Trans Med Imaging. 2023 Sep;42(9):2513-2523. doi: 10.1109/TMI.2023.3262680. Epub 2023 Aug 31.
9
Deep cross-modality (MR-CT) educed distillation learning for cone beam CT lung tumor segmentation.
Med Phys. 2021 Jul;48(7):3702-3713. doi: 10.1002/mp.14902. Epub 2021 May 25.
10
Efficient Combination of CNN and Transformer for Dual-Teacher Uncertainty-guided Semi-supervised Medical Image Segmentation.
Comput Methods Programs Biomed. 2022 Nov;226:107099. doi: 10.1016/j.cmpb.2022.107099. Epub 2022 Sep 2.

Cited By

1
M3AE-Distill: An Efficient Distilled Model for Medical Vision-Language Downstream Tasks.
Bioengineering (Basel). 2025 Jul 6;12(7):738. doi: 10.3390/bioengineering12070738.
2
Enhancing efficient deep learning models with multimodal, multi-teacher insights for medical image segmentation.
Sci Rep. 2025 May 7;15(1):15948. doi: 10.1038/s41598-025-91430-0.
3
Automated Foveal Avascular Zone Segmentation in Optical Coherence Tomography Angiography Across Multiple Eye Diseases Using Knowledge Distillation.
Bioengineering (Basel). 2025 Mar 23;12(4):334. doi: 10.3390/bioengineering12040334.
4
FM-FCN: A Neural Network with Filtering Modules for Accurate Vital Signs Extraction.
Research (Wash D C). 2024 May 10;7:0361. doi: 10.34133/research.0361. eCollection 2024.