Castro Maria Claudia F, Pinheiro Wellington C, Rigolin Glauco
Electrical Engineering Department, Centro Universitário FEI, São Bernardo do Campo, Brazil.
Mechanical Engineering Department, Centro Universitário FEI, São Bernardo do Campo, Brazil.
Front Neurorobot. 2022 Jan 24;15:751282. doi: 10.3389/fnbot.2021.751282. eCollection 2021.
This study presents a new approach to an sEMG hand prosthesis based on a 3D-printed model with a fully embedded computer vision (CV) system in a hybrid configuration. A modified 5-layer Smaller Visual Geometry Group (VGG) convolutional neural network (CNN), running on a Raspberry Pi 3 microcomputer connected to a webcam, recognizes the shape of daily-use objects and selects the prosthetic grasp/gesture pattern from five classes: Palmar Neutral, Palmar Pronated, Tripod Pinch, Key Grasp, and Index Finger Extension. Using the Myoware board and a finite state machine, the user's intention, conveyed by a myoelectric signal, starts the process: the object is photographed, the grasp/gesture is classified, and the prosthetic motors are commanded to execute the movement. Keras was used as the application programming interface and TensorFlow as the numerical computing software. The proposed system achieved 99% accuracy, 97% sensitivity, and 99% specificity, showing that the CV system is a promising technology to assist in defining the grasp pattern of prosthetic devices.
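The abstract does not include the network definition, so the following is only a minimal sketch of what a modified 5-layer Smaller VGG classifier for the five grasp/gesture classes might look like in Keras. The five-convolution, three-block layout, the 96x96 RGB input size, and all hyperparameters are assumptions not stated in the source.

```python
# Hypothetical sketch of a SmallerVGG-style grasp/gesture classifier.
# The paper's exact architecture, input size, and training setup are not
# published in the abstract; everything below is an illustrative assumption.
from tensorflow.keras import layers, models

NUM_CLASSES = 5  # Palmar Neutral, Palmar Pronated, Tripod Pinch, Key Grasp, Index Finger Extension

def build_smaller_vgg(input_shape=(96, 96, 3), num_classes=NUM_CLASSES):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        # Block 1: one convolutional layer
        layers.Conv2D(32, (3, 3), padding="same", activation="relu"),
        layers.MaxPooling2D(pool_size=(3, 3)),
        # Block 2: two convolutional layers
        layers.Conv2D(64, (3, 3), padding="same", activation="relu"),
        layers.Conv2D(64, (3, 3), padding="same", activation="relu"),
        layers.MaxPooling2D(pool_size=(2, 2)),
        # Block 3: two convolutional layers (5 conv layers in total)
        layers.Conv2D(128, (3, 3), padding="same", activation="relu"),
        layers.Conv2D(128, (3, 3), padding="same", activation="relu"),
        layers.MaxPooling2D(pool_size=(2, 2)),
        # Classifier head over the five grasp/gesture classes
        layers.Flatten(),
        layers.Dense(512, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_smaller_vgg()
model.summary()
```

In a setup like the one described, such a model would be trained offline and only inference would run on the Raspberry Pi 3, with each webcam frame resized to the network input size before prediction.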