


Light-M: An efficient lightweight medical image segmentation framework for resource-constrained IoMT.

Affiliations

Shenzhen University, 3688 Nanhai Ave., Shenzhen, 518060, Guangdong, China.

Publication Information

Comput Biol Med. 2024 Mar;170:108088. doi: 10.1016/j.compbiomed.2024.108088. Epub 2024 Feb 3.

DOI: 10.1016/j.compbiomed.2024.108088
PMID: 38320339
Abstract

The Internet of Medical Things (IoMT) is being incorporated into current healthcare systems. This technology intends to connect patients, IoMT devices, and hospitals over mobile networks, allowing for more secure, quick, and convenient health monitoring and intelligent healthcare services. However, existing intelligent healthcare applications typically rely on large-scale AI models, and standard IoMT devices have significant resource constraints. To alleviate this paradox, in this paper, we propose a Knowledge Distillation (KD)-based IoMT end-edge-cloud orchestrated architecture for medical image segmentation tasks, called Light-M, aiming to deploy a lightweight medical model in resource-constrained IoMT devices. Specifically, Light-M trains a large teacher model in the cloud server and employs computation in local nodes through imitation of the performance of the teacher model using knowledge distillation. Light-M contains two KD strategies: (1) active exploration and passive transfer (AEPT) and (2) self-attention-based inter-class feature variation (AIFV) distillation for the medical image segmentation task. The AEPT encourages the student model to learn undiscovered knowledge/features of the teacher model without additional feature layers, aiming to explore new features and outperform the teacher. To improve the distinguishability of the student for different classes, the student learns the self-attention-based feature variation (AIFV) between classes. Since the proposed AEPT and AIFV only appear in the training process, our framework does not involve any additional computation burden for a student model during the segmentation task deployment. Extensive experiments on cardiac images and public real-scene datasets demonstrate that our approach improves student model learning representations and outperforms state-of-the-art methods by combining two knowledge distillation strategies. Moreover, when deployed on the IoT device, the distilled student model takes only 29.6 ms for one sample at the inference step.
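The abstract does not specify the AEPT or AIFV objectives in detail, but the teacher-student setup it describes builds on the standard knowledge-distillation loss, in which the student is trained to match the teacher's temperature-softened output distribution. A minimal sketch of that core objective in plain Python (function names and values are illustrative, not from the paper):

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-softened softmax over a list of logits."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def kd_loss(teacher_logits, student_logits, temperature=4.0):
    """KL(teacher || student) on temperature-softened distributions,
    scaled by T^2 as in Hinton et al.'s original distillation objective."""
    p = softmax(teacher_logits, temperature)   # soft teacher targets
    q = softmax(student_logits, temperature)   # student predictions
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return temperature ** 2 * kl

# A student whose logits match the teacher's incurs zero distillation loss.
print(kd_loss([2.0, 0.5, -1.0], [2.0, 0.5, -1.0]))  # 0.0
```

In segmentation this loss is applied per pixel over the class logits; because it shapes only the training gradient, the deployed student carries no extra inference cost, consistent with the paper's claim.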


Similar Articles

1. Light-M: An efficient lightweight medical image segmentation framework for resource-constrained IoMT.
Comput Biol Med. 2024 Mar;170:108088. doi: 10.1016/j.compbiomed.2024.108088. Epub 2024 Feb 3.
2. Efficient skin lesion segmentation with boundary distillation.
Med Biol Eng Comput. 2024 Sep;62(9):2703-2716. doi: 10.1007/s11517-024-03095-y. Epub 2024 May 1.
3. MSKD: Structured knowledge distillation for efficient medical image segmentation.
Comput Biol Med. 2023 Sep;164:107284. doi: 10.1016/j.compbiomed.2023.107284. Epub 2023 Aug 2.
4. Leveraging different learning styles for improved knowledge distillation in biomedical imaging.
Comput Biol Med. 2024 Jan;168:107764. doi: 10.1016/j.compbiomed.2023.107764. Epub 2023 Nov 30.
5. Feature distance-weighted adaptive decoupled knowledge distillation for medical image segmentation.
Int J Comput Assist Radiol Surg. 2025 Apr 22. doi: 10.1007/s11548-025-03346-9.
6. [A joint distillation model for the tumor segmentation using breast ultrasound images].
Sheng Wu Yi Xue Gong Cheng Xue Za Zhi. 2025 Feb 25;42(1):148-155. doi: 10.7507/1001-5515.202311054.
7. ScribSD+: Scribble-supervised medical image segmentation based on simultaneous multi-scale knowledge distillation and class-wise contrastive regularization.
Comput Med Imaging Graph. 2024 Sep;116:102416. doi: 10.1016/j.compmedimag.2024.102416. Epub 2024 Jul 9.
8. Exploring Generalizable Distillation for Efficient Medical Image Segmentation.
IEEE J Biomed Health Inform. 2024 Jul;28(7):4170-4183. doi: 10.1109/JBHI.2024.3385098.
9. Efficient knowledge distillation for liver CT segmentation using growing assistant network.
Phys Med Biol. 2021 Nov 26;66(23). doi: 10.1088/1361-6560/ac3935.
10. PMFSNet: Polarized multi-scale feature self-attention network for lightweight medical image segmentation.
Comput Methods Programs Biomed. 2025 Apr;261:108611. doi: 10.1016/j.cmpb.2025.108611. Epub 2025 Jan 25.

Cited By

1. DEFIF-Net: A lightweight dual-encoding feature interaction fusion network for medical image segmentation.
PLoS One. 2025 May 29;20(5):e0324861. doi: 10.1371/journal.pone.0324861. eCollection 2025.