YOLO-CFruit: a robust object detection method for fruit in complex environments.

Author Information

Luo Yuanyin, Liu Yang, Wang Haorui, Chen Haifei, Liao Kai, Li Lijun

Affiliations

Engineering Research Center for Forestry Equipment of Hunan Province, Central South University of Forestry and Technology, Changsha, China.

Engineering Research Center for Smart Agricultural Machinery Beidou Navigation Adaptation Technology and Equipment of Hunan Province, Hunan Automotive Engineering Vocational University, Zhuzhou, China.

Publication Information

Front Plant Sci. 2024 Aug 14;15:1389961. doi: 10.3389/fpls.2024.1389961. eCollection 2024.

Abstract

INTRODUCTION

In agriculture, automated fruit harvesting has become an important research area. However, accurately detecting fruit in natural environments is challenging: factors such as shadows can impede the performance of traditional detection techniques, highlighting the need for more robust methods.

METHODS

To overcome these challenges, we propose an efficient deep learning method called YOLO-CFruit, specifically designed to accurately detect Camellia oleifera fruits in challenging natural environments. First, we collected fruit images and created a dataset, then applied data augmentation to further increase the dataset's diversity. Our YOLO-CFruit model combines a CBAM module, which identifies regions of interest in scenes containing Camellia oleifera fruit, with a CSP module incorporating a Transformer to capture global information. In addition, we improve YOLO-CFruit by replacing the CIoU loss in the original YOLOv5 with the EIoU loss.
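The EIoU loss mentioned above extends the IoU loss by adding penalties for the center-point distance and for the width and height differences between the predicted and ground-truth boxes, each normalized by the smallest enclosing box. A minimal plain-Python sketch follows; the (x1, y1, x2, y2) box format is an assumption for illustration, and the paper's actual implementation operates on batched tensors inside YOLOv5:

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def eiou_loss(pred, gt):
    """EIoU = 1 - IoU + center-distance term + width term + height term."""
    # Smallest box enclosing both pred and gt
    ex1, ey1 = min(pred[0], gt[0]), min(pred[1], gt[1])
    ex2, ey2 = max(pred[2], gt[2]), max(pred[3], gt[3])
    cw, ch = ex2 - ex1, ey2 - ey1       # enclosing width / height
    c2 = cw ** 2 + ch ** 2 + 1e-9       # squared enclosing diagonal
    # Squared distance between box centers
    pcx, pcy = (pred[0] + pred[2]) / 2, (pred[1] + pred[3]) / 2
    gcx, gcy = (gt[0] + gt[2]) / 2, (gt[1] + gt[3]) / 2
    rho2 = (pcx - gcx) ** 2 + (pcy - gcy) ** 2
    # Width/height mismatch penalties, normalized by the enclosing box
    pw, ph = pred[2] - pred[0], pred[3] - pred[1]
    gw, gh = gt[2] - gt[0], gt[3] - gt[1]
    return (1 - iou(pred, gt)
            + rho2 / c2
            + (pw - gw) ** 2 / (cw ** 2 + 1e-9)
            + (ph - gh) ** 2 / (ch ** 2 + 1e-9))
```

Unlike CIoU, which couples width and height into a single aspect-ratio term, EIoU penalizes the two dimensions separately, which tends to speed convergence of box regression.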

RESULTS

Testing the trained network, we find that the method performs well, achieving an average precision of 98.2%, a recall of 94.5%, an accuracy of 98%, an F1 score of 96.2%, and an average detection time of 19.02 ms per frame. The experimental results show that, compared with the conventional YOLOv5s network, our method improves average precision by 1.2% and achieves the highest accuracy and a higher F1 score than all the state-of-the-art networks evaluated.
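The reported F1 score is consistent with the standard harmonic-mean definition, taking the reported 98% value as the precision term (an interpretive assumption, since the abstract labels it "accuracy"):

```python
precision, recall = 0.98, 0.945  # values reported in the abstract

# F1 is the harmonic mean of precision and recall
f1 = 2 * precision * recall / (precision + recall)
print(round(100 * f1, 1))  # 96.2
```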

DISCUSSION

The robust performance of YOLO-CFruit under different real-world conditions, including varying light and shading scenarios, demonstrates its high reliability and lays a solid foundation for the development of automated picking devices.


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5017/11443175/99de53520d05/fpls-15-1389961-g001.jpg
