Suppr 超能文献


Revolutionizing Robotic Depalletizing: AI-Enhanced Parcel Detecting with Adaptive 3D Machine Vision and RGB-D Imaging for Automated Unloading.

Author information

Kim Seongje, Truong Van-Doi, Lee Kwang-Hee, Yoon Jonghun

Affiliations

Department of Mechanical Design Engineering, Hanyang University, 222 Wangsimni-ro, Seongdong-gu, Seoul 04763, Republic of Korea.

BK21 FOUR ERICA-ACE Center, Hanyang University, 55 Hanyangdaehak-ro, Sangnok-gu, Ansan-si 15588, Republic of Korea.

Publication information

Sensors (Basel). 2024 Feb 24;24(5):1473. doi: 10.3390/s24051473.

DOI: 10.3390/s24051473
PMID: 38475009
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10935264/
Abstract

Detecting parcels accurately and efficiently has always been a challenging task when unloading from trucks onto conveyor belts because of the diverse and complex ways in which parcels are stacked. Conventional methods struggle to quickly and accurately classify the various shapes and surface patterns of unordered parcels. In this paper, we propose a parcel-picking surface detection method based on deep learning and image processing for the efficient unloading of diverse and unordered parcels. Our goal is to develop a systematic image processing algorithm that emphasises the boundaries of parcels regardless of their shape, pattern, or layout. The core of the algorithm is the utilisation of RGB-D technology for detecting the primary boundary lines regardless of obstacles such as adhesive labels, tapes, or parcel surface patterns. For cases where detecting the boundary lines is difficult owing to narrow gaps between parcels, we propose using deep learning-based boundary line detection through the You Only Look at Coefficients (YOLACT) model. Using image segmentation techniques, the algorithm efficiently predicts boundary lines, enabling the accurate detection of irregularly sized parcels with complex surface patterns. Furthermore, even for rotated parcels, we can extract their edges through complex mathematical operations using the depth values of the specified position, enabling the detection of the wider surfaces of the rotated parcels. Finally, we validate the accuracy and real-time performance of our proposed method through various case studies, achieving mAP (50) values of 93.8% and 90.8% for randomly sized and rotationally covered boxes with diverse colours and patterns, respectively.
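The core intuition above is that depth discontinuities mark true parcel boundaries even where adhesive labels, tape, or printed patterns would fool a colour-only edge detector. As a minimal, hypothetical numpy sketch of that idea (not the authors' implementation, which combines RGB-D processing with YOLACT segmentation), one can threshold the gradient magnitude of a depth map to flag candidate boundary pixels:

```python
import numpy as np

def depth_boundary_mask(depth, thresh=0.05):
    """Flag pixels where depth jumps sharply between neighbours.

    Depth steps between adjacent parcel faces survive surface clutter
    (labels, tape, printed patterns) that confuses colour-based edges.
    """
    # Per-axis finite differences, then gradient magnitude per pixel.
    gy, gx = np.gradient(depth.astype(float))
    grad_mag = np.hypot(gx, gy)
    return grad_mag > thresh

# Synthetic scene: two flat parcel faces at different camera distances,
# meeting at column 8 -- a depth step the detector should localise.
depth = np.full((16, 16), 1.00)   # near face at 1.00 m
depth[:, 8:] = 1.20               # far face, 20 cm further away

mask = depth_boundary_mask(depth, thresh=0.05)
```

With central differences, the 0.2 m step produces a gradient of 0.1 at the two columns straddling the boundary, so the mask isolates a thin vertical strip there; a real pipeline would still need the paper's learned segmentation for parcels whose gaps are too narrow to register as depth steps.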


Figures (PMC, sensors-24-01473, g001–g015):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/83df/10935264/db8f6f3c2c8a/sensors-24-01473-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/83df/10935264/370ed697f827/sensors-24-01473-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/83df/10935264/171bd7a4b586/sensors-24-01473-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/83df/10935264/4133e0bf05cd/sensors-24-01473-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/83df/10935264/bc672a6b01bc/sensors-24-01473-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/83df/10935264/3dbde7028559/sensors-24-01473-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/83df/10935264/12469cdd7802/sensors-24-01473-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/83df/10935264/c555baeec1e5/sensors-24-01473-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/83df/10935264/3fd3ae5c5ab3/sensors-24-01473-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/83df/10935264/1e1d31bd49aa/sensors-24-01473-g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/83df/10935264/2df89203050a/sensors-24-01473-g011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/83df/10935264/f39db3d48c04/sensors-24-01473-g012.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/83df/10935264/2152fa82302b/sensors-24-01473-g013.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/83df/10935264/21b736485735/sensors-24-01473-g014.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/83df/10935264/bc7f8cc62cea/sensors-24-01473-g015.jpg

Similar articles

1
Revolutionizing Robotic Depalletizing: AI-Enhanced Parcel Detecting with Adaptive 3D Machine Vision and RGB-D Imaging for Automated Unloading.
Sensors (Basel). 2024 Feb 24;24(5):1473. doi: 10.3390/s24051473.
2
Visual Sorting of Express Parcels Based on Multi-Task Deep Learning.
Sensors (Basel). 2020 Nov 27;20(23):6785. doi: 10.3390/s20236785.
3
Depth Image-Based Deep Learning of Grasp Planning for Textureless Planar-Faced Objects in Vision-Guided Robotic Bin-Picking.
Sensors (Basel). 2020 Jan 28;20(3):706. doi: 10.3390/s20030706.
4
[Farmland parcel extraction based on high resolution remote sensing image].
Guang Pu Xue Yu Guang Pu Fen Xi. 2009 Oct;29(10):2703-7.
5
ViT-MAENB7: An innovative breast cancer diagnosis model from 3D mammograms using advanced segmentation and classification process.
Comput Methods Programs Biomed. 2024 Dec;257:108373. doi: 10.1016/j.cmpb.2024.108373. Epub 2024 Aug 23.
6
FusionVision: A Comprehensive Approach of 3D Object Reconstruction and Segmentation from RGB-D Cameras Using YOLO and Fast Segment Anything.
Sensors (Basel). 2024 Apr 30;24(9):2889. doi: 10.3390/s24092889.
7
Research on high-speed classification and location algorithm for logistics parcels based on a monocular camera.
Sci Rep. 2024 Jul 10;14(1):15901. doi: 10.1038/s41598-024-66941-x.
8
Edge Preserving and Multi-Scale Contextual Neural Network for Salient Object Detection.
IEEE Trans Image Process. 2018;27(1):121-134. doi: 10.1109/TIP.2017.2756825.
9
Real-Time 3D Reconstruction Method Based on Monocular Vision.
Sensors (Basel). 2021 Sep 2;21(17):5909. doi: 10.3390/s21175909.
10
Detecting flowering phenology in oil seed rape parcels with Sentinel-1 and -2 time series.
Remote Sens Environ. 2020 Mar 15;239:111660. doi: 10.1016/j.rse.2020.111660.

Cited by

1
Machine Vision-Assisted Design of End Effector Pose in Robotic Mixed Depalletizing of Heterogeneous Cargo.
Sensors (Basel). 2025 Feb 13;25(4):1137. doi: 10.3390/s25041137.
2
Research on high-speed classification and location algorithm for logistics parcels based on a monocular camera.
Sci Rep. 2024 Jul 10;14(1):15901. doi: 10.1038/s41598-024-66941-x.

References

1
Visual Sorting of Express Parcels Based on Multi-Task Deep Learning.
Sensors (Basel). 2020 Nov 27;20(23):6785. doi: 10.3390/s20236785.
2
YOLACT++ Better Real-Time Instance Segmentation.
IEEE Trans Pattern Anal Mach Intell. 2022 Feb;44(2):1108-1121. doi: 10.1109/TPAMI.2020.3014297. Epub 2022 Jan 7.
3
Object Detection With Deep Learning: A Review.
IEEE Trans Neural Netw Learn Syst. 2019 Nov;30(11):3212-3232. doi: 10.1109/TNNLS.2018.2876865. Epub 2019 Jan 28.
4
A computational approach to edge detection.
IEEE Trans Pattern Anal Mach Intell. 1986 Jun;8(6):679-98.
5
Contour detection and hierarchical image segmentation.
IEEE Trans Pattern Anal Mach Intell. 2011 May;33(5):898-916. doi: 10.1109/TPAMI.2010.161.