


Efficient knowledge distillation for liver CT segmentation using growing assistant network.

Author Information

Xu Pengcheng, Kim Kyungsang, Koh Jeongwan, Wu Dufan, Lee Yu Rim, Park Soo Young, Tak Won Young, Liu Huafeng, Li Quanzheng

Affiliations

College of Optical Science and Engineering, Zhejiang University, Hangzhou, People's Republic of China.

Massachusetts General Hospital and Harvard Medical School, Radiology Department, 55 Fruit Street, Boston, MA 02114, United States of America.

Publication Information

Phys Med Biol. 2021 Nov 26;66(23). doi: 10.1088/1361-6560/ac3935.

DOI: 10.1088/1361-6560/ac3935
PMID: 34768246
Abstract

Segmentation is widely used in diagnosis, lesion detection, and surgery planning. Although deep learning (DL)-based segmentation methods currently outperform traditional methods, most DL-based segmentation models are computationally expensive and memory-inefficient, which makes them unsuitable for interventional liver surgery. A simple remedy is to shrink the segmentation model for fast inference, but there is a trade-off between model size and performance. In this paper, we propose a DL-based real-time 3-D liver CT segmentation method in which knowledge distillation (KD), i.e. knowledge transfer from a teacher model to a student model, is used to compress the model while preserving its performance. Because knowledge transfer is known to be inefficient when the disparity between teacher and student model sizes is large, we propose a growing teacher assistant network (GTAN) that learns the knowledge gradually without extra computational cost and can transfer knowledge efficiently even across a large gap in model size. In our results, the dice similarity coefficient of the student model with KD improved by 1.2 percentage points (85.9% to 87.1%) over the student model without KD, matching the performance of the teacher model while using only 8% (100k) of its parameters. Furthermore, with a student model of 2% (30k) of the parameters, the proposed GTAN improved the dice coefficient by about 2 percentage points over the student model without KD, with an inference time of 13 ms per 3-D image. The proposed method therefore has great potential for interventional liver surgery as well as many other real-time applications.
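The teacher-to-student transfer described in the abstract rests on a distillation loss that mixes the teacher's temperature-softened probabilities with the ordinary hard-label cross-entropy. A minimal NumPy sketch of that classic KD objective (this is an illustrative reconstruction, not the authors' code; the temperature `T` and mixing weight `alpha` are assumed values):

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled, numerically stable softmax.
    z = np.asarray(z, dtype=float) / T
    z -= z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    # Soft-target term: KL divergence between the softened teacher and
    # student distributions, scaled by T^2 to keep gradient magnitudes
    # comparable across temperatures.
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1)
    # Hard-label term: standard cross-entropy against the ground truth.
    p = softmax(student_logits)
    ce = -np.log(p[np.arange(len(labels)), labels] + 1e-12)
    return float(np.mean(alpha * (T ** 2) * kl + (1 - alpha) * ce))
```

A growing teacher assistant network would apply this loss repeatedly along a chain of intermediate models, so that each step bridges only a modest size gap.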

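The dice similarity coefficient quoted in the results measures overlap between a predicted mask and the ground-truth mask. A minimal sketch of the metric on binary masks (illustrative, not the paper's implementation; the `eps` smoothing term is an assumption):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    # Dice = 2 * |pred ∩ target| / (|pred| + |target|),
    # with eps guarding against empty masks.
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```

Identical masks score 1.0; disjoint masks score 0, so the 85.9% to 87.1% gain reported above corresponds to a 1.2-point increase on this scale.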

Similar Articles

1. G-MBRMD: Lightweight liver segmentation model based on guided teaching with multi-head boundary reconstruction mapping distillation. Comput Biol Med. 2024 Aug;178:108733. doi: 10.1016/j.compbiomed.2024.108733. Epub 2024 Jun 18.
2. Knowledge distillation on individual vertebrae segmentation exploiting 3D U-Net. Comput Med Imaging Graph. 2024 Apr;113:102350. doi: 10.1016/j.compmedimag.2024.102350. Epub 2024 Feb 8.
3. MSKD: Structured knowledge distillation for efficient medical image segmentation. Comput Biol Med. 2023 Sep;164:107284. doi: 10.1016/j.compbiomed.2023.107284. Epub 2023 Aug 2.
4. Light-M: An efficient lightweight medical image segmentation framework for resource-constrained IoMT. Comput Biol Med. 2024 Mar;170:108088. doi: 10.1016/j.compbiomed.2024.108088. Epub 2024 Feb 3.
5. ABUS tumor segmentation via decouple contrastive knowledge distillation. Phys Med Biol. 2023 Dec 26;69(1). doi: 10.1088/1361-6560/ad1274.
6. FCKDNet: A Feature Condensation Knowledge Distillation Network for Semantic Segmentation. Entropy (Basel). 2023 Jan 7;25(1):125. doi: 10.3390/e25010125.
7. Knowledge distillation with ensembles of convolutional neural networks for medical image segmentation. J Med Imaging (Bellingham). 2022 Sep;9(5):052407. doi: 10.1117/1.JMI.9.5.052407. Epub 2022 May 28.
8. Leveraging different learning styles for improved knowledge distillation in biomedical imaging. Comput Biol Med. 2024 Jan;168:107764. doi: 10.1016/j.compbiomed.2023.107764. Epub 2023 Nov 30.
9. Resolution-Aware Knowledge Distillation for Efficient Inference. IEEE Trans Image Process. 2021;30:6985-6996. doi: 10.1109/TIP.2021.3101158. Epub 2021 Aug 6.

Cited By

1. Magnetic Resonance Imaging Liver Segmentation Protocol Enables More Consistent and Robust Annotations, Paving the Way for Advanced Computer-Assisted Analysis. Diagnostics (Basel). 2024 Dec 11;14(24):2785. doi: 10.3390/diagnostics14242785.
2. A Multifunctional Network with Uncertainty Estimation and Attention-Based Knowledge Distillation to Address Practical Challenges in Respiration Rate Estimation. Sensors (Basel). 2023 Feb 1;23(3):1599. doi: 10.3390/s23031599.
3. Knowledge distillation with ensembles of convolutional neural networks for medical image segmentation. J Med Imaging (Bellingham). 2022 Sep;9(5):052407. doi: 10.1117/1.JMI.9.5.052407. Epub 2022 May 28.