Faculty of Mechatronics, Armament and Aerospace, Military University of Technology, Kaliskiego 2 Street, 00-908 Warsaw, Poland.
Sensors (Basel). 2020 Nov 7;20(21):6358. doi: 10.3390/s20216358.
The paper presents the possibility of using the Kinect v2 module to control an industrial robot by means of gestures and voice commands. It describes the elements of creating software for off-line and on-line robot control. The application for the Kinect module was developed in C# in the Visual Studio environment, while the industrial robot control program was developed in the RAPID language in the RobotStudio environment. Developing a two-threaded application in RAPID made it possible to separate two independent tasks for the IRB120 robot. The robot's main task is performed in Thread No. 1 (responsible for movement). Simultaneously, Thread No. 2 maintains continuous communication with the Kinect system and delivers information about gesture and voice commands in real time without interfering with Thread No. 1. This solution allows the robot to work in industrial conditions without the communication task negatively affecting the robot's cycle times. Thanks to a digital twin of the real robot station, correct application functioning was tested in off-line mode (without using a real robot). The obtained results were then verified on-line (on the real test station). Tests of gesture-recognition correctness were carried out, and the robot recognized all programmed gestures. A further test covered the recognition and execution of voice commands. A difference in task-completion time between the real and virtual stations was observed; the average difference was 0.67 s. The last test examined the impact of interference on the recognition of voice commands. With a 10 dB difference between the command and the noise, voice-command recognition reached 91.43%. The developed computer programs have a modular structure, which enables easy adaptation to process requirements.
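The two-task separation described above can be sketched in RAPID. This is a minimal illustration, not the authors' actual program: the module names, the socket address/port, the command codes, and the home position are all assumptions. The communication task receives command codes from the Kinect PC application over a socket and publishes them through a `PERS` variable, which RAPID shares between tasks when it is declared identically in both; the motion task polls that variable and moves accordingly, so socket traffic never blocks motion.

```
MODULE KinectComm
    ! Thread No. 2 (sketch): socket link to the Kinect PC application.
    PERS num nCommand := 0;          ! shared with the motion task
    VAR socketdev clientSocket;
    VAR string sMsg;
    VAR num nVal;
    VAR bool bOk;

    PROC main()
        SocketCreate clientSocket;
        ! Assumed PC address and port for illustration only.
        SocketConnect clientSocket, "192.168.125.5", 1025;
        WHILE TRUE DO
            ! Each message is a numeric gesture/voice command code.
            SocketReceive clientSocket \Str:=sMsg;
            bOk := StrToVal(sMsg, nVal);
            IF bOk THEN
                nCommand := nVal;    ! publish to the motion task
            ENDIF
        ENDWHILE
    ENDPROC
ENDMODULE

MODULE MotionTask
    ! Thread No. 1 (sketch): reads the shared command and moves.
    PERS num nCommand := 0;          ! identical declaration => shared
    CONST jointtarget jHome := [[0,0,0,0,30,0],
                                [9E9,9E9,9E9,9E9,9E9,9E9]];

    PROC main()
        WHILE TRUE DO
            TEST nCommand
            CASE 1:
                ! e.g. "go home" gesture/voice command
                MoveAbsJ jHome, v200, fine, tool0;
            DEFAULT:
                ! no pending command
            ENDTEST
            nCommand := 0;           ! consume the command
            WaitTime 0.1;
        ENDWHILE
    ENDPROC
ENDMODULE
```

In a real controller the two modules would be assigned to separate RAPID tasks in the system configuration; the `PERS` handshake keeps the motion cycle time independent of communication latency, matching the behaviour reported in the abstract.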