
Grape clusters detection based on multi-scale feature fusion and augmentation.

Authors

Ma Jinlin, Xu Silong, Ma Ziping, Fu Hong, Lin Baobao

Affiliations

School of Computer Science and Engineering, North Minzu University, Yinchuan, 750021, China.

Key Laboratory of Images and Graphics Intelligent Processing of National Ethnic Affairs Commission, North Minzu University, Yinchuan, 750021, China.

Publication

Sci Rep. 2024 Sep 30;14(1):22701. doi: 10.1038/s41598-024-72727-y.

DOI: 10.1038/s41598-024-72727-y
PMID: 39349599
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11443102/
Abstract

This paper addresses the challenge of low detection accuracy of grape clusters caused by scale differences, illumination changes, and occlusion in realistic and complex scenes. We propose a multi-scale feature fusion and augmentation YOLOv7 network to enhance the detection accuracy of grape clusters across variable environments. First, we design a Multi-Scale Feature Extraction Module (MSFEM) to enhance feature extraction for small-scale targets. Second, we propose the Receptive Field Augmentation Module (RFAM), which uses dilated convolution to expand the receptive field and enhance the detection accuracy for objects of various scales. Third, we present the Spatial Pyramid Pooling Cross Stage Partial Concatenation Faster (SPPCSPCF) module to fuse multi-scale features, improving accuracy and speeding up model training. Finally, we integrate the Residual Global Attention Mechanism (ResGAM) into the network to better focus on crucial regions and features. Experimental results show that our proposed method achieves a mAP of 93.29% on the GrappoliV2 dataset, an improvement of 5.39% over YOLOv7. Additionally, our method increases Precision, Recall, and F1 score by 2.83%, 3.49%, and 0.07, respectively. Compared to state-of-the-art detection methods, our approach demonstrates superior detection performance and adaptability to various environments for detecting grape clusters.
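The RFAM described above relies on dilated convolution to enlarge the receptive field without adding parameters. As a minimal sketch of why that works (the layer configurations below are illustrative, not the paper's actual architecture), the standard receptive-field arithmetic shows how dilation stretches each kernel's effective size:

```python
def receptive_field(layers):
    """Receptive field of the final output neuron.

    Each layer is a (kernel_size, stride, dilation) tuple.
    """
    rf, jump = 1, 1  # jump = cumulative stride in input pixels
    for k, s, d in layers:
        k_eff = d * (k - 1) + 1       # effective kernel size after dilation
        rf += (k_eff - 1) * jump      # each layer widens the field by (k_eff-1)*jump
        jump *= s
    return rf

# Three standard 3x3 convolutions vs. three with dilations 1, 2, 4:
plain = [(3, 1, 1), (3, 1, 1), (3, 1, 1)]
dilated = [(3, 1, 1), (3, 1, 2), (3, 1, 4)]

print(receptive_field(plain))    # 7
print(receptive_field(dilated))  # 15
```

With the same parameter count (three 3x3 kernels), the dilated stack sees a 15-pixel context instead of 7, which is the effect RFAM exploits to cover grape clusters at multiple scales.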


Figures (PMC, Figs. 1-12):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5e67/11443102/ab3266626b5e/41598_2024_72727_Fig1_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5e67/11443102/6d9a8e1bdaf5/41598_2024_72727_Fig2_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5e67/11443102/935ce645b391/41598_2024_72727_Fig3_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5e67/11443102/e7bd6005e016/41598_2024_72727_Fig4_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5e67/11443102/80533855cec5/41598_2024_72727_Fig5_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5e67/11443102/1aebe17ebf9d/41598_2024_72727_Fig6_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5e67/11443102/012fa4aefd4c/41598_2024_72727_Fig7_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5e67/11443102/b9cffc15cd25/41598_2024_72727_Fig8_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5e67/11443102/3624efd078de/41598_2024_72727_Fig9_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5e67/11443102/3405b52c0cf7/41598_2024_72727_Fig10_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5e67/11443102/0072b48a9c1b/41598_2024_72727_Fig11_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5e67/11443102/143555d7841c/41598_2024_72727_Fig12_HTML.jpg

Similar Articles

1. Grape clusters detection based on multi-scale feature fusion and augmentation.
   Sci Rep. 2024 Sep 30;14(1):22701. doi: 10.1038/s41598-024-72727-y.
2. Tomato leaf disease detection based on attention mechanism and multi-scale feature fusion.
   Front Plant Sci. 2024 Apr 9;15:1382802. doi: 10.3389/fpls.2024.1382802. eCollection 2024.
3. Multi-scale feature fusion for pavement crack detection based on Transformer.
   Math Biosci Eng. 2023 Jul 11;20(8):14920-14937. doi: 10.3934/mbe.2023668.
4. Lightweight high-precision SAR ship detection method based on YOLOv7-LDS.
   PLoS One. 2024 Feb 13;19(2):e0296992. doi: 10.1371/journal.pone.0296992. eCollection 2024.
5. YOLOv7-TS: A Traffic Sign Detection Model Based on Sub-Pixel Convolution and Feature Fusion.
   Sensors (Basel). 2024 Feb 3;24(3):989. doi: 10.3390/s24030989.
6. Weed detection and recognition in complex wheat fields based on an improved YOLOv7.
   Front Plant Sci. 2024 Jun 24;15:1372237. doi: 10.3389/fpls.2024.1372237. eCollection 2024.
7. A Multi-Scale Natural Scene Text Detection Method Based on Attention Feature Extraction and Cascade Feature Fusion.
   Sensors (Basel). 2024 Jun 9;24(12):3758. doi: 10.3390/s24123758.
8. Automatic detection method for tobacco beetles combining multi-scale global residual feature pyramid network and dual-path deformable attention.
   Sci Rep. 2024 Feb 28;14(1):4862. doi: 10.1038/s41598-024-55347-4.
9. [Fully Automatic Glioma Segmentation Algorithm of Magnetic Resonance Imaging Based on 3D-UNet With More Global Contextual Feature Extraction: An Improvement on Insufficient Extraction of Global Features].
   Sichuan Da Xue Xue Bao Yi Xue Ban. 2024 Mar 20;55(2):447-454. doi: 10.12182/20240360208.
10. Small Object Detection in Traffic Scenes Based on Attention Feature Fusion.
   Sensors (Basel). 2021 Apr 26;21(9):3031. doi: 10.3390/s21093031.

References Cited in This Article

1. GrapeMOTS: UAV vineyard dataset with MOTS grape bunch annotations recorded from multiple perspectives for enhanced object detection and tracking.
   Data Brief. 2024 Apr 16;54:110432. doi: 10.1016/j.dib.2024.110432. eCollection 2024 Jun.
2. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks.
   IEEE Trans Pattern Anal Mach Intell. 2017 Jun;39(6):1137-1149. doi: 10.1109/TPAMI.2016.2577031. Epub 2016 Jun 6.
3. Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition.
   IEEE Trans Pattern Anal Mach Intell. 2015 Sep;37(9):1904-16. doi: 10.1109/TPAMI.2015.2389824.