Obstacle classification and 3D measurement in unstructured environments based on ToF cameras.

Authors

Yu Hongshan, Zhu Jiang, Wang Yaonan, Jia Wenyan, Sun Mingui, Tang Yandong

Affiliations

College of Electrical and Information Engineering, Hunan University, Changsha 410082, China.

Laboratory for Computational Neuroscience, University of Pittsburgh, Pittsburgh, PA 15213, USA.

Publication

Sensors (Basel). 2014 Jun 18;14(6):10753-82. doi: 10.3390/s140610753.

DOI: 10.3390/s140610753
PMID: 24945679
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC4118419/
Abstract

Inspired by the human 3D visual perception system, we present an obstacle detection and classification method based on the use of Time-of-Flight (ToF) cameras for robotic navigation in unstructured environments. The ToF camera provides 3D sensing by capturing an image along with per-pixel 3D space information. Based on this valuable feature and human knowledge of navigation, the proposed method first removes irrelevant regions which do not affect robot's movement from the scene. In the second step, regions of interest are detected and clustered as possible obstacles using both 3D information and intensity image obtained by the ToF camera. Consequently, a multiple relevance vector machine (RVM) classifier is designed to classify obstacles into four possible classes based on the terrain traversability and geometrical features of the obstacles. Finally, experimental results in various unstructured environments are presented to verify the robustness and performance of the proposed approach. We have found that, compared with the existing obstacle recognition methods, the new approach is more accurate and efficient.
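The three-stage pipeline described in the abstract (remove irrelevant regions, cluster candidate obstacles from the 3D data, classify each cluster into one of four traversability classes) can be sketched as below. All thresholds, the 1D proximity-based clustering, and the rule-based stand-in for the paper's multiple-RVM classifier are illustrative assumptions, not the authors' actual algorithm or parameter values.

```python
# Hedged sketch of the abstract's pipeline. Point format assumed: (x, y, z)
# in metres, z up. The RVM stage is replaced by a toy rule set with the
# same four-way output; none of the numbers below come from the paper.

def remove_irrelevant(points, robot_height=1.5):
    """Step 1: drop points too high above ground to affect movement."""
    return [p for p in points if p[2] <= robot_height]

def cluster_obstacles(points, gap=0.5):
    """Step 2: group remaining points into candidate obstacles by
    proximity along x (a crude 1D stand-in for 3D region clustering)."""
    clusters = []
    for p in sorted(points):
        if clusters and p[0] - clusters[-1][-1][0] <= gap:
            clusters[-1].append(p)
        else:
            clusters.append([p])
    return clusters

def classify(cluster):
    """Step 3: assign one of four classes from geometric features.
    (The paper trains a multiple-RVM classifier on terrain
    traversability and geometry; these if/else rules are placeholders.)"""
    zs = [p[2] for p in cluster]
    height = max(zs) - min(zs)
    if height < 0.1:
        return "traversable"
    if height < 0.3:
        return "partially traversable"
    if min(zs) < 0:
        return "negative obstacle"  # e.g., a ditch below ground level
    return "non-traversable"

points = [(0.0, 0, 0.0), (0.1, 0, 0.05),   # low, flat patch
          (2.0, 0, 0.0), (2.1, 0, 0.6),    # tall object
          (5.0, 0, 2.0)]                   # above the robot, ignored
kept = remove_irrelevant(points)
labels = [classify(c) for c in cluster_obstacles(kept)]
```

Running this on the toy scene yields one "traversable" and one "non-traversable" cluster, with the overhead point discarded in step 1.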


Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/08fa/4118419/381daae8e873/sensors-14-10753f1.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/08fa/4118419/dfafdcd83902/sensors-14-10753f2.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/08fa/4118419/2228f4e43de8/sensors-14-10753f3.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/08fa/4118419/8169de63f57a/sensors-14-10753f4.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/08fa/4118419/0e89613c44ef/sensors-14-10753f5.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/08fa/4118419/235cb56529b0/sensors-14-10753f6.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/08fa/4118419/2eee9e627d5a/sensors-14-10753f7.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/08fa/4118419/9eaf3fb3305d/sensors-14-10753f8.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/08fa/4118419/3e1d0bf2a5e6/sensors-14-10753f9.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/08fa/4118419/0fff78b75c3d/sensors-14-10753f10.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/08fa/4118419/45f4885831da/sensors-14-10753f11a.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/08fa/4118419/6f2d2d12c655/sensors-14-10753f12.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/08fa/4118419/791dc14921ed/sensors-14-10753f13.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/08fa/4118419/226a7572c304/sensors-14-10753f14a.jpg

Similar Articles

1. Obstacle classification and 3D measurement in unstructured environments based on ToF cameras.
Sensors (Basel). 2014 Jun 18;14(6):10753-82. doi: 10.3390/s140610753.
2. Dense range map reconstruction from a versatile robotic sensor system with an active trinocular vision and a passive binocular vision.
Appl Opt. 2008 Apr 10;47(11):1927-39. doi: 10.1364/ao.47.001927.
3. Rapid biologically-inspired scene classification using features shared with visual attention.
IEEE Trans Pattern Anal Mach Intell. 2007 Feb;29(2):300-12. doi: 10.1109/TPAMI.2007.40.
4. HOPIS: hybrid omnidirectional and perspective imaging system for mobile robots.
Sensors (Basel). 2014 Sep 4;14(9):16508-31. doi: 10.3390/s140916508.
5. A new active visual system for humanoid robots.
IEEE Trans Syst Man Cybern B Cybern. 2008 Apr;38(2):320-30. doi: 10.1109/TSMCB.2007.912082.
6. A multiple-feature and multiple-kernel scene segmentation algorithm for humanoid robot.
IEEE Trans Cybern. 2014 Nov;44(11):2232-41. doi: 10.1109/TSMC.2013.2297398. Epub 2014 Jan 20.
7. A systolic algorithm for Euclidean distance transform.
IEEE Trans Pattern Anal Mach Intell. 2006 Jul;28(7):1127-34. doi: 10.1109/TPAMI.2006.133.
8. Visual control of robots using range images.
Sensors (Basel). 2010;10(8):7303-22. doi: 10.3390/s100807303. Epub 2010 Aug 4.
9. Using sensor habituation in mobile robots to reduce oscillatory movements in narrow corridors.
IEEE Trans Neural Netw. 2005 Nov;16(6):1582-9. doi: 10.1109/TNN.2005.853714.
10. 3D steering of a flexible needle by visual servoing.
Med Image Comput Comput Assist Interv. 2014;17(Pt 1):480-7. doi: 10.1007/978-3-319-10404-1_60.

Cited By

1. State-of-the-Art Review on Wearable Obstacle Detection Systems Developed for Assistive Technologies and Footwear.
Sensors (Basel). 2023 Mar 3;23(5):2802. doi: 10.3390/s23052802.
2. Towards Accurate Ground Plane Normal Estimation from Ego-Motion.
Sensors (Basel). 2022 Dec 1;22(23):9375. doi: 10.3390/s22239375.
3. Novel Laser-Based Obstacle Detection for Autonomous Robots on Unstructured Terrain.
Sensors (Basel). 2020 Sep 5;20(18):5048. doi: 10.3390/s20185048.
4. Pixel-Wise Crack Detection Using Deep Local Pattern Predictor for Robot Application.
Sensors (Basel). 2018 Sep 11;18(9):3042. doi: 10.3390/s18093042.
5. An Indoor Obstacle Detection System Using Depth Information and Region Growth.
Sensors (Basel). 2015 Oct 23;15(10):27116-41. doi: 10.3390/s151027116.

References

1. Serial and parallel processing of visual feature conjunctions.
Nature. 1986;320(6059):264-5. doi: 10.1038/320264a0.