

Learning lightweight tea detector with reconstructed feature and dual distillation.

Affiliations

School of Information and Artificial Intelligence, Anhui Agricultural University, 130 Changjiang West Road, Shushan District, Hefei City, Anhui Province, China.

Key Laboratory of Agricultural Sensors, Ministry of Agriculture and Rural Affairs, Hefei, China.

Publication

Sci Rep. 2024 Oct 10;14(1):23669. doi: 10.1038/s41598-024-73674-4.

DOI:10.1038/s41598-024-73674-4
PMID:39390063
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11467173/
Abstract

Image recognition based on deep neural networks has become the mainstream research direction, and significant progress has been made in its application to tea detection. Many deep models achieve high recognition rates in tea leaf detection. However, deploying these models directly on tea-picking equipment in natural environments is impractical: their extremely high parameter counts and computational complexity make real-time tea leaf detection challenging, while lightweight models struggle to achieve competitive detection accuracy. To address the computational resource constraints of remote mountain areas, this paper proposes Reconstructed Feature and Dual Distillation (RFDD) to enhance the detection capability of lightweight models for tea leaves.

In our method, the Reconstructed Feature selectively masks the features of the student model based on the spatial attention map of the teacher model, and uses a generation block to force the student model to generate the teacher's full features. The Dual Distillation comprises Decoupled Distillation and Global Distillation. Decoupled Distillation divides the reconstructed feature into foreground and background features based on the Ground-Truth, compelling the student model to allocate different attention to foreground and background and to focus on their critical pixels and channels. However, Decoupled Distillation loses the relation knowledge between foreground and background pixels, so we further perform Global Distillation to recover this lost knowledge. Since RFDD only requires loss computation on feature maps, it can be easily applied to various detectors. We conducted experiments on detectors with different frameworks, using a tea dataset collected at the Huangshan Houkui Tea Plantation.

The experimental results indicate that, under the guidance of RFDD, the student detectors achieved performance improvements to varying degrees. For instance, a one-stage detector such as RetinaNet (ResNet-50) gained 3.14% in Average Precision (AP) after RFDD guidance; similarly, a two-stage model such as Faster RCNN (ResNet-50) gained 3.53% AP. This offers promising prospects for lightweight models to efficiently perform real-time tea leaf detection.
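The foreground/background split in Decoupled Distillation can be sketched as follows. This is a minimal single-channel illustration in plain Python, not the paper's implementation: the per-region mean-squared error and the `fg_weight`/`bg_weight` balancing factors are illustrative assumptions, and the real method operates on multi-channel feature maps with attention-weighted terms.

```python
def decoupled_distill_loss(student, teacher, gt_mask,
                           fg_weight=1.0, bg_weight=0.5):
    """Split the per-pixel squared error between student and teacher
    features into foreground (inside a Ground-Truth region) and
    background terms, then weight the two regions separately, as the
    Decoupled Distillation described in the abstract does.

    student, teacher: H x W nested lists of floats (one channel).
    gt_mask: H x W nested lists of 0/1 flags (1 = Ground-Truth foreground).
    fg_weight, bg_weight: illustrative balancing hyper-parameters.
    """
    fg_sum = bg_sum = 0.0
    fg_n = bg_n = 0
    for s_row, t_row, m_row in zip(student, teacher, gt_mask):
        for s, t, m in zip(s_row, t_row, m_row):
            err = (s - t) ** 2
            if m:
                fg_sum += err
                fg_n += 1
            else:
                bg_sum += err
                bg_n += 1
    # Mean error per region, guarded against empty regions.
    fg_loss = fg_sum / fg_n if fg_n else 0.0
    bg_loss = bg_sum / bg_n if bg_n else 0.0
    return fg_weight * fg_loss + bg_weight * bg_loss
```

Because the foreground and background means are computed separately, a few high-error foreground pixels are not averaged away by a large, easy background, which is the point of decoupling; the relation between the two regions is then handled by the separate Global Distillation term.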


Figures (PMC image files):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/559f/11467173/3123ba9d3f94/41598_2024_73674_Fig1_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/559f/11467173/1841ccbc4ea9/41598_2024_73674_Fig2_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/559f/11467173/db0422b7ea2a/41598_2024_73674_Fig3_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/559f/11467173/9cf5cb2e18d5/41598_2024_73674_Fig4_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/559f/11467173/abc8cb0c6f43/41598_2024_73674_Fig5_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/559f/11467173/21207874466b/41598_2024_73674_Fig6_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/559f/11467173/f88e785b4643/41598_2024_73674_Fig7_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/559f/11467173/7b6c349eb330/41598_2024_73674_Fig8_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/559f/11467173/012bd2d83934/41598_2024_73674_Fig9_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/559f/11467173/07108eb7626a/41598_2024_73674_Fig10_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/559f/11467173/8706913ce996/41598_2024_73674_Fig11_HTML.jpg

Similar articles

1. Learning lightweight tea detector with reconstructed feature and dual distillation.
Sci Rep. 2024 Oct 10;14(1):23669. doi: 10.1038/s41598-024-73674-4.

2. Lightweight CNN combined with knowledge distillation for the accurate determination of black tea fermentation degree.
Food Res Int. 2024 Oct;194:114929. doi: 10.1016/j.foodres.2024.114929. Epub 2024 Aug 18.

3. Inferior and Coordinate Distillation for Object Detectors.
Sensors (Basel). 2022 Jul 30;22(15):5719. doi: 10.3390/s22155719.

4. Structured Knowledge Distillation for Accurate and Efficient Object Detection.
IEEE Trans Pattern Anal Mach Intell. 2023 Dec;45(12):15706-15724. doi: 10.1109/TPAMI.2023.3300470. Epub 2023 Nov 3.

5. Cosine similarity-guided knowledge distillation for robust object detectors.
Sci Rep. 2024 Aug 14;14(1):18888. doi: 10.1038/s41598-024-69813-6.

6. T-YOLO: a lightweight and efficient detection model for nutrient buds in complex tea-plantation environments.
J Sci Food Agric. 2024 Aug 15;104(10):5698-5711. doi: 10.1002/jsfa.13396. Epub 2024 Mar 4.

7. YOLOv8-RMDA: Lightweight YOLOv8 Network for Early Detection of Small Target Diseases in Tea.
Sensors (Basel). 2024 May 1;24(9):2896. doi: 10.3390/s24092896.

8. Small object detection algorithm incorporating swin transformer for tea buds.
PLoS One. 2024 Mar 21;19(3):e0299902. doi: 10.1371/journal.pone.0299902. eCollection 2024.

9. Exploring Generalizable Distillation for Efficient Medical Image Segmentation.
IEEE J Biomed Health Inform. 2024 Jul;28(7):4170-4183. doi: 10.1109/JBHI.2024.3385098.

10. Efficient skin lesion segmentation with boundary distillation.
Med Biol Eng Comput. 2024 Sep;62(9):2703-2716. doi: 10.1007/s11517-024-03095-y. Epub 2024 May 1.