Zhang Jing, Deng Ruoling, Cai Chengzhi, Zou Erpeng, Liu Haitao, Hou Mingxin, Chen Xinzhi, Lin Huamin, Wei Zhenye
School of Mechanical Engineering, Guangdong Ocean University, Zhanjiang, Guangdong, China.
Guangdong Engineering Technology Research Center of Ocean Equipment and Manufacturing, Guangdong Ocean University, Zhanjiang, Guangdong, China.
Front Plant Sci. 2025 Jul 17;16:1604514. doi: 10.3389/fpls.2025.1604514. eCollection 2025.
The detection of lucky bamboo nodes is a critical prerequisite for machining the stems into high-value handicrafts. Current manual detection methods are inefficient, labor-intensive, and error-prone, necessitating an automated solution.
This study proposes an improved YOLOv7-based model for real-time, precise bamboo node detection. The model integrates a Squeeze-and-Excitation (SE) attention mechanism into the feature extraction network to enhance target localization and introduces a Weighted Intersection over Union (WIoU) loss function to optimize bounding box regression. A dataset of 2,000 annotated images (augmented from 1,000 originals) was constructed, covering diverse environmental conditions (e.g., blurred backgrounds, occlusions). Training was conducted on a server with an RTX 4090 GPU using PyTorch.
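The paper does not include code, but the SE attention it integrates into the feature extraction network follows a standard squeeze-and-excitation pattern: global average pooling per channel, a two-layer bottleneck, and a sigmoid gate that rescales each channel. A minimal PyTorch sketch of such a block (channel count and reduction ratio are illustrative assumptions, not values from the paper):

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation channel attention (Hu et al., 2018)."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Squeeze: global average pool each channel to a single value
        self.pool = nn.AdaptiveAvgPool2d(1)
        # Excitation: bottleneck MLP producing per-channel gates in (0, 1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels, bias=False),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        # Recalibrate: scale each channel of the input feature map
        return x * w
```

In an improved-YOLOv7 setting, blocks like this are typically inserted after backbone or neck stages so that channels carrying node features are emphasized before detection heads.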
The proposed model achieved 97.6% mAP@0.5, outperforming the original YOLOv7 (83.4% mAP@0.5) by 14.2 percentage points while matching its inference speed (100.18 FPS). Compared to state-of-the-art alternatives, the model demonstrated superior efficiency: its FPS was 41.5% higher than YOLOv11 (70.8 FPS) and 153% higher than YOLOv12 (39.54 FPS). Despite a marginally lower mAP (≤1.3 percentage points) than these models, its balanced accuracy-speed trade-off makes it more suitable for industrial deployment. Robustness tests under challenging conditions (e.g., low light, occlusions) further validated its reliability, with consistent confidence scores across scenarios.
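The abstract does not define its WIoU loss in detail; one common formulation in the detection literature (Wise-IoU v1) scales the IoU loss by a distance-aware focusing factor computed from the smallest box enclosing the prediction and the ground truth. A minimal NumPy sketch under that assumption, for boxes given as (x1, y1, x2, y2):

```python
import numpy as np

def wiou_v1_loss(pred, target):
    """Wise-IoU v1 style loss: focus * (1 - IoU)."""
    # Intersection area of the two boxes
    xa, ya = max(pred[0], target[0]), max(pred[1], target[1])
    xb, yb = min(pred[2], target[2]), min(pred[3], target[3])
    inter = max(0.0, xb - xa) * max(0.0, yb - ya)
    area_p = (pred[2] - pred[0]) * (pred[3] - pred[1])
    area_t = (target[2] - target[0]) * (target[3] - target[1])
    iou = inter / (area_p + area_t - inter)

    # Squared distance between box centers
    dx = (pred[0] + pred[2]) / 2 - (target[0] + target[2]) / 2
    dy = (pred[1] + pred[3]) / 2 - (target[1] + target[3]) / 2

    # Squared diagonal of the smallest enclosing box (treated as a
    # constant, i.e. detached from the gradient, in the original paper)
    cw = max(pred[2], target[2]) - min(pred[0], target[0])
    ch = max(pred[3], target[3]) - min(pred[1], target[1])

    focus = np.exp((dx * dx + dy * dy) / (cw * cw + ch * ch))
    return focus * (1.0 - iou)
```

The focusing factor grows with center misalignment, penalizing poorly localized boxes more strongly than a plain IoU loss, which is consistent with the paper's goal of sharper bounding box regression around bamboo nodes.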
The proposed method significantly improves detection accuracy and efficiency, offering a viable tool for industrial applications in smart agriculture and handicraft production. Future work will address limitations in detecting nodes obscured by mottled patterns or severe occlusions by expanding label categories during training.