Shi Zhikang, Bai Ziwen, Yi Kechuan, Qiu Baijing, Dong Xiaoya, Wang Qingqing, Jiang Chunxia, Zhang Xinwei, Huang Xin
College of Intelligent Manufacturing, Anhui Science and Technology University, Chuzhou 239000, China.
Key Laboratory of Plant Protection Engineering, Ministry of Agriculture and Rural Affairs, Jiangsu University, Zhenjiang 212013, China.
Sensors (Basel). 2025 Sep 2;25(17):5432. doi: 10.3390/s25175432.
To address the limited accuracy of traditional single-sensor navigation methods in the densely planted environment of pomegranate orchards, this paper proposes a vision and LiDAR fusion-based navigation line extraction method for orchard environments. The method integrates a YOLOv8-ResCBAM trunk detection model, a reverse ray projection fusion algorithm, and geometric-constraint-based navigation line fitting. The detection model provides high-precision, real-time detection of pomegranate tree trunks. The reverse ray projection algorithm converts pixel coordinates from visual detections into three-dimensional rays and computes their intersections with the LiDAR scanning plane, effectively associating the visual and LiDAR data. Finally, geometric constraints are introduced into the RANSAC algorithm for navigation line fitting, and Kalman filtering is applied to suppress navigation line fluctuations. Field experiments demonstrate that the proposed fusion-based navigation method improves navigation accuracy over single-sensor and semantic-segmentation methods, reducing the average lateral error to 5.2 cm with a lateral-error RMS of 6.6 cm and achieving a navigation success rate of 95.4%. These results validate the effectiveness of the vision and 2D LiDAR fusion-based approach in complex orchard environments and provide a viable route toward autonomous navigation for orchard robots.
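The abstract does not give implementation details for the reverse ray projection step, but under standard pinhole-camera assumptions the idea can be sketched as follows: back-project a detected trunk pixel through the camera intrinsics into a 3D ray, express that ray in the LiDAR frame using the camera-to-LiDAR extrinsics, and intersect it with the 2D LiDAR's scan plane (taken here as z = 0 in the LiDAR frame). The function name, parameter names, and frame conventions below are illustrative assumptions, not the paper's actual code.

```python
import numpy as np

def pixel_to_lidar_plane(u, v, K, R_cl, t_cl):
    """Back-project a trunk pixel to a 3D ray and intersect it with
    the 2D LiDAR's scan plane (assumed z = 0 in the LiDAR frame).

    K          : 3x3 camera intrinsic matrix
    R_cl, t_cl : extrinsics mapping camera-frame points into the
                 LiDAR frame (p_l = R_cl @ p_c + t_cl)
    Returns the (x, y) hit point in the LiDAR frame, or None if the
    ray is (near-)parallel to the plane or points away from it.
    """
    # Ray direction in the camera frame (scale is irrelevant)
    d_c = np.linalg.inv(K) @ np.array([u, v, 1.0])
    # Ray origin (camera centre) and direction in the LiDAR frame
    o = t_cl
    d = R_cl @ d_c
    if abs(d[2]) < 1e-9:          # ray parallel to the scan plane
        return None
    s = -o[2] / d[2]              # solve o_z + s * d_z = 0
    if s <= 0:                    # intersection behind the camera
        return None
    p = o + s * d
    return p[0], p[1]
```

Each returned (x, y) point can then be matched against nearby LiDAR returns, associating a detected trunk in the image with its range measurement.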
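The geometric-constraint RANSAC and Kalman filtering steps can likewise be sketched. The specific constraints used in the paper are not stated in the abstract; as a plausible stand-in, the sketch below rejects candidate lines whose direction deviates too far from an assumed row heading (the x-axis), and a scalar Kalman filter damps frame-to-frame jitter in the line's lateral offset. All thresholds and names are illustrative assumptions.

```python
import numpy as np

def ransac_line_with_heading_constraint(pts, max_angle_deg=20.0,
                                        dist_thresh=0.10, iters=200, rng=None):
    """RANSAC line fit over 2D trunk points (N x 2 array), keeping only
    candidates whose direction lies within max_angle_deg of the assumed
    row axis (x-axis). Returns (point_on_line, unit_direction) or None."""
    rng = rng or np.random.default_rng(0)
    best_inliers, best_model = 0, None
    for _ in range(iters):
        i, j = rng.choice(len(pts), size=2, replace=False)
        d = pts[j] - pts[i]
        n = np.linalg.norm(d)
        if n < 1e-9:
            continue
        d = d / n
        # Geometric constraint: reject lines far from the row heading
        if np.degrees(np.arccos(min(1.0, abs(d[0])))) > max_angle_deg:
            continue
        normal = np.array([-d[1], d[0]])
        dist = np.abs((pts - pts[i]) @ normal)   # point-to-line distances
        inliers = int(np.sum(dist < dist_thresh))
        if inliers > best_inliers:
            best_inliers, best_model = inliers, (pts[i].copy(), d)
    return best_model

class ScalarKalman:
    """1D constant-value Kalman filter to smooth the navigation line's
    lateral offset across frames."""
    def __init__(self, q=1e-3, r=1e-2):
        self.x, self.p, self.q, self.r = 0.0, 1.0, q, r
    def update(self, z):
        self.p += self.q                   # predict: inflate uncertainty
        k = self.p / (self.p + self.r)     # Kalman gain
        self.x += k * (z - self.x)         # correct toward measurement
        self.p *= (1.0 - k)
        return self.x
```

The heading constraint prunes spurious cross-row lines before inlier counting, which is cheaper than filtering them afterwards; the filter's q/r ratio trades responsiveness against smoothness of the published navigation line.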