
A Point Cloud Data-Driven Pallet Pose Estimation Method Using an Active Binocular Vision Sensor.

Affiliations

College of Mechanical Engineering, Zhejiang University of Technology, Hangzhou 310023, China.

Noblelift Intelligent Equipment Co., Ltd., Huzhou 313100, China.

Publication information

Sensors (Basel). 2023 Jan 20;23(3):1217. doi: 10.3390/s23031217.

DOI: 10.3390/s23031217
PMID: 36772256
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9919204/
Abstract

Pallet pose estimation is one of the key technologies for the automated fork pickup of driverless industrial trucks. Owing to complex working environments and the enormous volume of sensor data, existing pose estimation approaches cannot meet the accuracy and real-time requirements of intelligent logistics equipment. A point cloud data-driven pallet pose estimation method using an active binocular vision sensor is proposed, consisting of point cloud preprocessing, Adaptive Gaussian Weight-based Fast Point Feature Histogram extraction, and point cloud registration. The proposed method overcomes the poor robustness, long runtime, and low accuracy of traditional pose estimation methods, and achieves efficient and accurate pallet pose estimation for driverless industrial trucks. Experimental results show that, compared with the traditional Fast Point Feature Histogram (FPFH) and Signature of Histograms of Orientations (SHOT) descriptors, the proposed approach improves accuracy by over 35% and reduces feature extraction time by over 30%, verifying its effectiveness and superiority.

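The paper's Adaptive Gaussian Weight-based FPFH and full registration pipeline are not reproduced here. As a rough illustration only, two generic building blocks named in the abstract — point cloud preprocessing (here, standard voxel-grid downsampling, not necessarily the authors' exact scheme) and rigid pose recovery from matched points (the classical Kabsch/SVD solution used inside many registration methods) — can be sketched in NumPy; all function names are illustrative:

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Average all points that fall into the same voxel (a common preprocessing step)."""
    idx = np.floor(points / voxel_size).astype(np.int64)
    _, inverse, counts = np.unique(idx, axis=0, return_inverse=True, return_counts=True)
    centroids = np.zeros((counts.size, 3))
    np.add.at(centroids, inverse, points)   # sum the points in each voxel
    return centroids / counts[:, None]      # one centroid per occupied voxel

def best_fit_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst (Kabsch/SVD)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)           # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

# Demo: recover a known 90-degree yaw and translation from matched points.
src = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
R_true = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
t_true = np.array([1., 2., 3.])
dst = src @ R_true.T + t_true
R, t = best_fit_transform(src, dst)
print(np.allclose(R, R_true), np.allclose(t, t_true))  # True True
```

Real pipelines iterate the second step with re-estimated correspondences (ICP) after a coarse alignment from feature matches such as FPFH.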

[Figures 1–21 (sensors-23-01217-g001 through g021): image files available via the PMC full text, https://pmc.ncbi.nlm.nih.gov/articles/PMC9919204/]

Similar articles

1. A Point Cloud Data-Driven Pallet Pose Estimation Method Using an Active Binocular Vision Sensor. Sensors (Basel). 2023 Jan 20;23(3):1217. doi: 10.3390/s23031217.
2. A Novel Pallet Detection Method for Automated Guided Vehicles Based on Point Cloud Data. Sensors (Basel). 2022 Oct 20;22(20):8019. doi: 10.3390/s22208019.
3. ECPC-ICP: A 6D Vehicle Pose Estimation Method by Fusing the Roadside Lidar Point Cloud and Road Feature. Sensors (Basel). 2021 May 17;21(10):3489. doi: 10.3390/s21103489.
4. Real-Time Bucket Pose Estimation Based on Deep Neural Network and Registration Using Onboard 3D Sensor. Sensors (Basel). 2023 Aug 5;23(15):6958. doi: 10.3390/s23156958.
5. Three-Dimensional Object Recognition and Registration for Robotic Grasping Systems Using a Modified Viewpoint Feature Histogram. Sensors (Basel). 2016 Nov 23;16(11):1969. doi: 10.3390/s16111969.
6. Point Cloud Based Relative Pose Estimation of a Satellite in Close Range. Sensors (Basel). 2016 Jun 4;16(6):824. doi: 10.3390/s16060824.
7. 3D LiDAR Point Cloud Registration Based on IMU Preintegration in Coal Mine Roadways. Sensors (Basel). 2023 Mar 26;23(7):3473. doi: 10.3390/s23073473.
8. RGB-D-Based Pose Estimation of Workpieces with Semantic Segmentation and Point Cloud Registration. Sensors (Basel). 2019 Apr 19;19(8):1873. doi: 10.3390/s19081873.
9. PCKRF: Point Cloud Completion and Keypoint Refinement With Fusion Data for 6D Pose Estimation. IEEE Trans Vis Comput Graph. 2024 Apr 17;PP. doi: 10.1109/TVCG.2024.3390122.
10. Rigid point cloud registration based on correspondence cloud for image-to-patient registration in image-guided surgery. Med Phys. 2024 Jul;51(7):4554-4566. doi: 10.1002/mp.17243. Epub 2024 Jun 10.

Cited by

1. High-Precision Positioning and Rotation Angle Estimation for a Target Pallet Based on BeiDou Navigation Satellite System and Vision. Sensors (Basel). 2024 Aug 17;24(16):5330. doi: 10.3390/s24165330.

References

1. A Novel Pallet Detection Method for Automated Guided Vehicles Based on Point Cloud Data. Sensors (Basel). 2022 Oct 20;22(20):8019. doi: 10.3390/s22208019.
2. An Accurate and Robust Method for Absolute Pose Estimation with UAV Using RANSAC. Sensors (Basel). 2022 Aug 8;22(15):5925. doi: 10.3390/s22155925.
3. Accuracy evaluation of surface registration algorithm using normal distribution transform in stereotactic body radiotherapy/radiosurgery: A phantom study. J Appl Clin Med Phys. 2022 Mar;23(3):e13521. doi: 10.1002/acm2.13521. Epub 2022 Jan 5.
4. LiDAR-IMU Time Delay Calibration Based on Iterative Closest Point and Iterated Sigma Point Kalman Filter. Sensors (Basel). 2017 Mar 8;17(3):539. doi: 10.3390/s17030539.
5. Go-ICP: A Globally Optimal Solution to 3D ICP Point-Set Registration. IEEE Trans Pattern Anal Mach Intell. 2016 Nov;38(11):2241-2254. doi: 10.1109/TPAMI.2015.2513405. Epub 2015 Dec 30.