Cui Jianwei, Yan Bingyan
Institute of Instrument Science and Engineering, Southeast University, Nanjing 210096, China.
Biomimetics (Basel). 2024 Dec 19;9(12):775. doi: 10.3390/biomimetics9120775.
The realization of hand function reengineering using a manipulator is a research hotspot in robotics. In this paper, we propose a multimodal perception and control method for a robotic hand that assists the disabled. Human hand movement can be divided into two parts: coordinating the posture of the fingers, and coordinating the timing of grasping and releasing objects. Accordingly, we first used a pinhole camera to build a vision device suitable for finger mounting and preclassified object shapes based on YOLOv8. We then proposed a pipeline that filters multi-frame synthesized point cloud data from a miniature 2D lidar, clusters objects with the DBSCAN algorithm, and matches them with the DTW algorithm, further identifying the cross-sectional shape and size of the grasped part of the object and realizing control of the robot's grasping gesture. Finally, we proposed a multimodal perception and control method for prosthetic hands: to control the grasping process, a fusion algorithm based on upper-limb motion state, hand position, and lesser-toe tactile information was proposed, realizing human-in-the-loop control of robotic grasping. The device designed in this paper does not contact human skin and causes no discomfort, and the completion rate in the grasping-process experiment reached 91.63%, indicating that the proposed control method is feasible and applicable.
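The lidar stage described above (cluster the 2D scan with DBSCAN, then compare the cluster's cross-sectional profile against shape templates with DTW) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the parameter values (`eps`, `min_pts`), the radial-profile representation, and the helper names are all assumptions introduced here.

```python
import numpy as np

def dbscan_2d(points, eps=0.02, min_pts=5):
    """Minimal DBSCAN over 2-D points; returns one label per point (-1 = noise)."""
    n = len(points)
    labels = np.full(n, -1)
    # pairwise distances are affordable at the point counts of a single 2D lidar scan
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    neighbors = [np.flatnonzero(dists[i] <= eps) for i in range(n)]
    visited = np.zeros(n, dtype=bool)
    cluster = 0
    for i in range(n):
        if visited[i] or len(neighbors[i]) < min_pts:
            continue
        # grow a new cluster from core point i
        visited[i] = True
        labels[i] = cluster
        seeds = list(neighbors[i])
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster          # border or core point joins the cluster
            if not visited[j]:
                visited[j] = True
                if len(neighbors[j]) >= min_pts:
                    seeds.extend(neighbors[j])   # core point: expand further
        cluster += 1
    return labels

def radial_profile(cluster_pts):
    """Hypothetical cross-section descriptor: radii sorted by angle about the centroid."""
    d = cluster_pts - cluster_pts.mean(axis=0)
    order = np.argsort(np.arctan2(d[:, 1], d[:, 0]))
    return np.linalg.norm(d, axis=1)[order]

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

A grasped object's cluster would then be classified by computing `radial_profile` for it and picking the template (e.g. a constant-radius profile for a cylinder) with the smallest `dtw_distance`; DTW tolerates the unequal point counts and phase shifts between scans that a plain Euclidean comparison would not.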
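The human-in-the-loop fusion of upper-limb motion state, hand position, and the lesser-toe tactile signal can be pictured as a small state machine that gates when the hand may close and open. The states, transition conditions, and sensor fields below are hypothetical illustrations of the idea, not the paper's actual fusion algorithm.

```python
from dataclasses import dataclass
from enum import Enum, auto

class HandState(Enum):
    IDLE = auto()       # arm at rest, hand open
    REACHING = auto()   # arm moving toward a target
    GRASPING = auto()   # fingers closing
    HOLDING = auto()    # object held

@dataclass
class Sensors:
    arm_moving: bool    # upper-limb motion state (e.g. from an IMU)
    near_object: bool   # hand position relative to the detected object
    toe_pressed: bool   # lesser-toe tactile switch: the user's intent signal

def step(state: HandState, s: Sensors) -> HandState:
    """One fusion update: motion and position gate the grasp,
    while the toe switch confirms the user's intent to grasp or release."""
    if state == HandState.IDLE and s.arm_moving:
        return HandState.REACHING
    if state == HandState.REACHING and s.near_object and s.toe_pressed:
        return HandState.GRASPING
    if state == HandState.GRASPING and not s.arm_moving:
        return HandState.HOLDING
    if state == HandState.HOLDING and s.toe_pressed:
        return HandState.IDLE   # a second toe press releases the object
    return state
```

Gating the grasp on all three signals keeps the human in the loop: no single noisy channel (arm motion alone, or an accidental toe press) can trigger closing the hand.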