Ito Masato, Noda Kuniaki, Hoshino Yukiko, Tani Jun
Sony Intelligence Dynamics Laboratories, Inc., Takanawa Muse Building 4F, 3-14-13 Higashigotanda, Tokyo 141-0022, Japan.
Neural Netw. 2006 Apr;19(3):323-37. doi: 10.1016/j.neunet.2006.02.007. Epub 2006 Apr 17.
This study presents experiments on the learning of object-handling behaviors by a small humanoid robot using a dynamic neural network model, the recurrent neural network with parametric bias (RNNPB). The first experiment showed that after the robot had learned different types of ball-handling behaviors through direct teaching by a human, it was able to generate ball-handling motor sequences appropriate to the relative position between its hands and the ball. The same scheme was applied to a block-handling learning task, where it was shown that the robot could switch among the different block-handling sequences it had learned, depending on how the human supporters interacted with it. Our analysis showed that entrainment of the internal memory structures of the RNNPB through interaction with the objects and the human supporters is the essential mechanism underlying these situated behaviors of the robot.
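To make the abstract's architecture concrete, the following is a minimal Python sketch (not the authors' code) of an RNNPB-style forward pass: a recurrent network whose hidden update is conditioned on a small, fixed "parametric bias" (PB) vector, so that the same learned weights generate different motor sequences depending on which PB vector is clamped. All layer sizes, weight values, and the example PB vector are illustrative assumptions, not values from the paper; in the actual model the weights are trained by backpropagation through time and the PB values are adapted online from prediction error.

import numpy as np

rng = np.random.default_rng(0)

# Dimensions (assumed for illustration): sensory-motor vector, hidden/context units, PB units.
n_in, n_hidden, n_pb = 8, 20, 2

# Randomly initialized weights stand in for weights that would be learned during teaching.
W_in  = rng.normal(0, 0.1, (n_hidden, n_in))
W_ctx = rng.normal(0, 0.1, (n_hidden, n_hidden))
W_pb  = rng.normal(0, 0.1, (n_hidden, n_pb))
W_out = rng.normal(0, 0.1, (n_in, n_hidden))
b_h   = np.zeros(n_hidden)
b_o   = np.zeros(n_in)

def rnnpb_step(x_t, context, pb):
    """One step: predict the next sensory-motor vector from the current one,
    the recurrent context, and the fixed PB vector encoding the behavior."""
    h = np.tanh(W_in @ x_t + W_ctx @ context + W_pb @ pb + b_h)
    x_next = np.tanh(W_out @ h + b_o)
    return x_next, h

# Generating a motor sequence: the same weights yield different trajectories
# depending on which PB vector is clamped (behavior switching).
pb_ball = np.array([0.8, -0.3])   # hypothetical PB vector for one learned behavior
x = np.zeros(n_in)                # current sensory-motor state
context = np.zeros(n_hidden)
for t in range(5):
    x, context = rnnpb_step(x, context, pb_ball)

The design point the sketch is meant to illustrate is that behavior selection lives in the low-dimensional PB vector rather than in separate networks: during interaction, regressing the PB toward values that reduce prediction error is one plausible reading of the "entrainment of internal memory structures" described in the abstract.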