

A Method for Real-Time Recognition of Safflower Filaments in Unstructured Environments Using the YOLO-SaFi Model.

Author Information

Chen Bangbang, Ding Feng, Ma Baojian, Wang Liqiang, Ning Shanping

Affiliations

School of Mechatronic Engineering, Xi'an Technological University, Xi'an 710021, China.

School of Mechatronic Engineering, Xinjiang Institute of Technology, Aksu 843100, China.

Publication Information

Sensors (Basel). 2024 Jul 8;24(13):4410. doi: 10.3390/s24134410.

Abstract

The identification of safflower filament targets and the precise localization of picking points are fundamental prerequisites for achieving automated filament retrieval. In light of challenges such as severe occlusion of targets, low recognition accuracy, and the considerable size of models in unstructured environments, this paper introduces a novel lightweight YOLO-SaFi model. The architectural design of this model features a Backbone layer incorporating the StarNet network; a Neck layer introducing a novel ELC convolution module to refine the C2f module; and a Head layer implementing a new lightweight shared convolution detection head, Detect_EL. Furthermore, the loss function is enhanced by upgrading CIoU to PIoUv2. These enhancements significantly augment the model's capability to perceive spatial information and facilitate multi-feature fusion, consequently enhancing detection performance and rendering the model more lightweight. Performance evaluations conducted via comparative experiments with the baseline model reveal that YOLO-SaFi achieved a reduction of parameters, computational load, and weight files by 50.0%, 40.7%, and 48.2%, respectively, compared to the YOLOv8 baseline model. Moreover, YOLO-SaFi demonstrated improvements in recall, mean average precision, and detection speed by 1.9%, 0.3%, and 88.4 frames per second, respectively. Finally, the deployment of the YOLO-SaFi model on the Jetson Orin Nano device corroborates the superior performance of the enhanced model, thereby establishing a robust visual detection framework for the advancement of intelligent safflower filament retrieval robots in unstructured environments.
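The abstract notes that the loss function was upgraded from CIoU to PIoUv2. For background, the following is a minimal, dependency-free sketch of the standard CIoU loss that the baseline YOLOv8 uses (IoU minus a center-distance penalty and an aspect-ratio penalty), not the paper's PIoUv2; the box format `(x1, y1, x2, y2)` and function names are illustrative assumptions.

```python
import math

def iou(box_a, box_b):
    """Plain IoU for axis-aligned boxes in (x1, y1, x2, y2) format."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def ciou_loss(pred, target):
    """CIoU loss: 1 - IoU + center-distance penalty + aspect-ratio penalty."""
    i = iou(pred, target)
    # Squared distance between the two box centers.
    pcx, pcy = (pred[0] + pred[2]) / 2, (pred[1] + pred[3]) / 2
    tcx, tcy = (target[0] + target[2]) / 2, (target[1] + target[3]) / 2
    rho2 = (pcx - tcx) ** 2 + (pcy - tcy) ** 2
    # Squared diagonal of the smallest box enclosing both.
    cx1, cy1 = min(pred[0], target[0]), min(pred[1], target[1])
    cx2, cy2 = max(pred[2], target[2]), max(pred[3], target[3])
    c2 = (cx2 - cx1) ** 2 + (cy2 - cy1) ** 2
    # Aspect-ratio consistency term v and its trade-off weight alpha.
    pw, ph = pred[2] - pred[0], pred[3] - pred[1]
    tw, th = target[2] - target[0], target[3] - target[1]
    v = (4 / math.pi ** 2) * (math.atan(tw / th) - math.atan(pw / ph)) ** 2
    alpha = v / (1 - i + v + 1e-9)
    return 1 - i + rho2 / c2 + alpha * v

# Identical boxes incur zero loss; an offset prediction is penalized.
print(ciou_loss((0, 0, 10, 10), (0, 0, 10, 10)))  # -> 0.0
print(ciou_loss((2, 2, 12, 12), (0, 0, 10, 10)))  # positive
```

PIoUv2 replaces this penalty structure with a distance-based penalty and a single tunable hyperparameter, which the authors report improves convergence on heavily occluded filament targets.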


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c8b1/11244584/bfe3c4fd8e7c/sensors-24-04410-g001.jpg
