


Enhanced Visual SLAM for Collision-Free Driving with Lightweight Autonomous Cars.

Authors

Lin Zhihao, Tian Zhen, Zhang Qi, Zhuang Hanyang, Lan Jianglin

Affiliations

James Watt School of Engineering, University of Glasgow, Glasgow G12 8QQ, UK.

Faculty of Science, University of Amsterdam, Science Park 904, 1098 XH Amsterdam, The Netherlands.

Publication

Sensors (Basel). 2024 Sep 27;24(19):6258. doi: 10.3390/s24196258.

DOI: 10.3390/s24196258
PMID: 39409298
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11478337/
Abstract

The paper presents a vision-based obstacle avoidance strategy for lightweight self-driving cars that can be run on a CPU-only device using a single RGB-D camera. The method consists of two steps: visual perception and path planning. The visual perception part uses ORBSLAM3 enhanced with optical flow to estimate the car's poses and extract rich texture information from the scene. In the path planning phase, the proposed method employs a method combining a control Lyapunov function and control barrier function in the form of a quadratic program (CLF-CBF-QP) together with an obstacle shape reconstruction process (SRP) to plan safe and stable trajectories. To validate the performance and robustness of the proposed method, simulation experiments were conducted with a car in various complex indoor environments using the Gazebo simulation environment. The proposed method can effectively avoid obstacles in the scenes. The proposed algorithm outperforms benchmark algorithms in achieving more stable and shorter trajectories across multiple simulated scenes.
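The CLF-CBF-QP filter named in the abstract can be illustrated in its simplest setting. The sketch below is not the paper's implementation: it assumes a single-integrator robot and one circular obstacle, in which case the QP (minimize deviation from the nominal CLF control subject to one linear CBF constraint) reduces to a closed-form projection onto a half-space, so no QP solver is needed. The gains `gamma` and `alpha` and all scenario values are hypothetical.

```python
import numpy as np

def clf_cbf_step(x, goal, obs, r, gamma=1.0, alpha=1.0):
    """One control step for a single-integrator robot x' = u.

    CLF V = ||x - goal||^2 drives the robot toward the goal; the CBF
    h = ||x - obs||^2 - r^2 keeps it outside an obstacle disc of
    radius r. With one linear constraint grad_h . u >= -alpha * h,
    the QP min ||u - u_clf||^2 has a closed-form solution: project
    the nominal control onto the safe half-space.
    """
    u = -gamma * (x - goal)                  # nominal CLF controller
    h = np.dot(x - obs, x - obs) - r ** 2    # barrier value (safe if > 0)
    grad_h = 2.0 * (x - obs)
    slack = np.dot(grad_h, u) + alpha * h
    if slack < 0:                            # CBF constraint active
        u = u - slack * grad_h / np.dot(grad_h, grad_h)
    return u

# Simulate: the obstacle sits directly between the start and the goal,
# with a small lateral offset to break the symmetry.
x = np.array([0.0, 0.05])
goal = np.array([4.0, 0.0])
obs, r = np.array([2.0, 0.0]), 0.5
dt, min_dist = 0.05, np.inf
for _ in range(600):
    x = x + dt * clf_cbf_step(x, goal, obs, r)
    min_dist = min(min_dist, np.linalg.norm(x - obs))
```

The trajectory bends around the disc and then converges to the goal. In the paper, the constraint geometry comes from the obstacle shape reconstruction process (SRP) rather than a fixed disc, and the QP handles the car's actual dynamics.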


Figures:
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3b5c/11478337/955ede166bcd/sensors-24-06258-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3b5c/11478337/0debae1f0f45/sensors-24-06258-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3b5c/11478337/9a23d8d153bf/sensors-24-06258-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3b5c/11478337/338f2b9062e4/sensors-24-06258-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3b5c/11478337/be333a2c59b5/sensors-24-06258-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3b5c/11478337/3160e1d66d55/sensors-24-06258-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3b5c/11478337/18bdf07383c0/sensors-24-06258-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3b5c/11478337/d6e89118b3d6/sensors-24-06258-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3b5c/11478337/212a4553c72d/sensors-24-06258-g009.jpg

Similar Articles

1
Enhanced Visual SLAM for Collision-Free Driving with Lightweight Autonomous Cars.
Sensors (Basel). 2024 Sep 27;24(19):6258. doi: 10.3390/s24196258.
2
A Dynamic Path-Planning Method for Obstacle Avoidance Based on the Driving Safety Field.
Sensors (Basel). 2023 Nov 14;23(22):9180. doi: 10.3390/s23229180.
3
Research on obstacle avoidance optimization and path planning of autonomous vehicles based on attention mechanism combined with multimodal information decision-making thoughts of robots.
Front Neurorobot. 2023 Sep 22;17:1269447. doi: 10.3389/fnbot.2023.1269447. eCollection 2023.
4
Research and Implementation of Autonomous Navigation for Mobile Robots Based on SLAM Algorithm under ROS.
Sensors (Basel). 2022 May 31;22(11):4172. doi: 10.3390/s22114172.
5
Robust and Efficient CPU-Based RGB-D Scene Reconstruction.
Sensors (Basel). 2018 Oct 28;18(11):3652. doi: 10.3390/s18113652.
6
Multimodal intelligent logistics robot combining 3D CNN, LSTM, and visual SLAM for path planning and control.
Front Neurorobot. 2023 Oct 16;17:1285673. doi: 10.3389/fnbot.2023.1285673. eCollection 2023.
7
RGB-D SLAM Using Point-Plane Constraints for Indoor Environments.
Sensors (Basel). 2019 Jun 17;19(12):2721. doi: 10.3390/s19122721.
8
A New Method for Classifying Scenes for Simultaneous Localization and Mapping Using the Boundary Object Function Descriptor on RGB-D Points.
Sensors (Basel). 2023 Oct 30;23(21):8836. doi: 10.3390/s23218836.
9
Collision Avoidance Path Planning and Tracking Control for Autonomous Vehicles Based on Model Predictive Control.
Sensors (Basel). 2024 Aug 12;24(16):5211. doi: 10.3390/s24165211.
10
Autonomous Exploration of Unknown Indoor Environments for High-Quality Mapping Using Feature-Based RGB-D SLAM.
Sensors (Basel). 2022 Jul 7;22(14):5117. doi: 10.3390/s22145117.

Cited By

1
Implementation of Visual Odometry on Jetson Nano.
Sensors (Basel). 2025 Feb 9;25(4):1025. doi: 10.3390/s25041025.
2
Survey of Autonomous Vehicles' Collision Avoidance Algorithms.
Sensors (Basel). 2025 Jan 10;25(2):395. doi: 10.3390/s25020395.

References

1
Learning high-speed flight in the wild.
Sci Robot. 2021 Oct 6;6(59):eabg5810. doi: 10.1126/scirobotics.abg5810.
2
Accurate Dynamic SLAM Using CRF-Based Long-Term Consistency.
IEEE Trans Vis Comput Graph. 2022 Apr;28(4):1745-1757. doi: 10.1109/TVCG.2020.3028218. Epub 2022 Feb 25.
3
RGB-D SLAM in Dynamic Environments Using Point Correlations.
IEEE Trans Pattern Anal Mach Intell. 2022 Jan;44(1):373-389. doi: 10.1109/TPAMI.2020.3010942. Epub 2021 Dec 7.