

Lightweight tea bud recognition network integrating GhostNet and YOLOv5.

Authors

Cao Miaolong, Fu Hao, Zhu Jiayi, Cai Chenggang

Affiliations

School of Mechanical and Energy Engineering, Zhejiang University of Science and Technology, Hangzhou 310023, China.

School of Biological and Chemical Engineering, Zhejiang University of Science and Technology, Hangzhou 310023, China.

Publication Information

Math Biosci Eng. 2022 Sep 5;19(12):12897-12914. doi: 10.3934/mbe.2022602.

Abstract

Aiming at the low detection accuracy and slow speed caused by the complex background of tea buds and their small target size, this paper proposes a tea bud detection algorithm that integrates GhostNet and YOLOv5. GhostNet modules are introduced to reduce the number of parameters and increase detection speed. A coordinate attention mechanism is then added to the backbone to strengthen the model's feature extraction ability. A bi-directional feature pyramid network (BiFPN) is used in the neck for feature fusion, increasing the interaction between shallow and deep features and improving the detection accuracy for small objects. Finally, efficient intersection over union (EIoU) is adopted as the localization loss to further improve detection accuracy. The experimental results show that the precision of GhostNet-YOLOv5 is 76.31%, which is 1.31%, 4.83% and 3.59% higher than that of Faster R-CNN, YOLOv5 and YOLOv5-Lite, respectively. Comparing the actual detection results of GhostNet-YOLOv5 and YOLOv5 on buds under different bud quantities, shooting angles and illumination angles, with the F1 score as the evaluation metric, GhostNet-YOLOv5 outperforms YOLOv5 by 7.84%, 2.88% and 3.81% in these three conditions.
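The parameter reduction described above comes from replacing standard convolutions with Ghost modules, which produce a subset of "intrinsic" feature maps with an ordinary convolution and derive the remaining "ghost" maps with a cheap depthwise convolution. The following is a minimal PyTorch sketch of such a module, assuming typical GhostNet settings; the ratio, kernel sizes and the SiLU activation are illustrative choices, not the authors' exact configuration.

```python
# Minimal sketch of a Ghost module (the GhostNet building block).
# Assumption: ratio=2, 1x1 primary conv, 3x3 depthwise "cheap" conv, SiLU
# activation; these are common defaults, not taken from the paper.
import torch
import torch.nn as nn


class GhostModule(nn.Module):
    """A standard convolution produces only part of the output channels
    (the intrinsic maps); the remaining ghost maps are generated from them
    by an inexpensive depthwise convolution, cutting parameters and FLOPs
    relative to a full convolution with the same output width."""

    def __init__(self, in_channels, out_channels, kernel_size=1,
                 ratio=2, dw_size=3, stride=1):
        super().__init__()
        init_channels = -(-out_channels // ratio)          # ceil division
        new_channels = init_channels * (ratio - 1)

        # Primary (ordinary) convolution: produces the intrinsic maps.
        self.primary_conv = nn.Sequential(
            nn.Conv2d(in_channels, init_channels, kernel_size, stride,
                      kernel_size // 2, bias=False),
            nn.BatchNorm2d(init_channels),
            nn.SiLU(inplace=True),
        )
        # Cheap operation: depthwise conv that creates the ghost maps.
        self.cheap_operation = nn.Sequential(
            nn.Conv2d(init_channels, new_channels, dw_size, 1,
                      dw_size // 2, groups=init_channels, bias=False),
            nn.BatchNorm2d(new_channels),
            nn.SiLU(inplace=True),
        )
        self.out_channels = out_channels

    def forward(self, x):
        intrinsic = self.primary_conv(x)
        ghost = self.cheap_operation(intrinsic)
        out = torch.cat([intrinsic, ghost], dim=1)
        return out[:, :self.out_channels, :, :]


if __name__ == "__main__":
    # Quick shape check on a dummy backbone feature map.
    m = GhostModule(64, 128)
    print(m(torch.randn(1, 64, 40, 40)).shape)  # torch.Size([1, 128, 40, 40])
```

For reference, the EIoU localization loss named in the abstract is commonly defined (following the original EIoU formulation, not reproduced from this paper) as

$$
\mathcal{L}_{\mathrm{EIoU}} = 1 - \mathrm{IoU}
+ \frac{\rho^{2}(\mathbf{b}, \mathbf{b}^{gt})}{c^{2}}
+ \frac{(w - w^{gt})^{2}}{C_{w}^{2}}
+ \frac{(h - h^{gt})^{2}}{C_{h}^{2}},
$$

where b and b^gt are the centers of the predicted and ground-truth boxes, ρ(·,·) is the Euclidean distance, c is the diagonal length of the smallest box enclosing both, C_w and C_h are that enclosing box's width and height, and (w, h), (w^gt, h^gt) are the predicted and ground-truth box dimensions. Splitting the aspect-ratio penalty into separate width and height terms gives more direct gradients for box regression than CIoU, which is consistent with the accuracy gain reported above.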

