
Learning obstacle avoidance with an operant behavior model.

Authors

Gutnisky D A, Zanutto B S

Affiliation

Instituto de Ingeniería Biomédica, FI-Universidad de Buenos Aires, Paseo Colón 850, CP 1063, Buenos Aires, Argentina.

Publication

Artif Life. 2004 Winter;10(1):65-81. doi: 10.1162/106454604322875913.

Abstract

Artificial intelligence researchers have been attracted by the idea of having robots learn how to accomplish a task rather than being told explicitly how to do so. Reinforcement learning has been proposed as an appealing framework for controlling mobile agents. Robot learning research and research on biological systems face many similar problems in achieving high flexibility across a variety of tasks. In this work, the control of a vehicle in an obstacle-avoidance task by a previously developed operant learning model (a form of animal learning) is studied. An environment is simulated in which a mobile robot with proximity sensors must minimize the punishment received for colliding with obstacles. The results were compared with the Q-Learning algorithm, and the proposed model performed better. In this way, a new artificial intelligence agent inspired by research in neurobiology, psychology, and ethology is proposed.
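The abstract describes a simulated robot with proximity sensors that learns to minimize punishment for collisions, with Q-Learning as the comparison baseline. The sketch below is a minimal, illustrative tabular Q-Learning baseline for such a setting; it does not reproduce the paper's operant learning model or its simulation details. The state discretization, action set, reward of -1 on collision, and the `sim` interface (`read_sensors`, `step`) are assumptions made for illustration only.

```python
import random
from collections import defaultdict

# Illustrative tabular Q-learning baseline for a collision-avoidance task.
# Assumes discretized proximity-sensor states and a negative reward
# (punishment) only when the robot collides with an obstacle.

ACTIONS = ["forward", "turn_left", "turn_right"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration rate

# Q-values default to 0 for unseen states.
q_table = defaultdict(lambda: {a: 0.0 for a in ACTIONS})

def discretize(sensor_readings, threshold=0.3):
    """Map continuous proximity readings to a coarse boolean state key."""
    return tuple(r < threshold for r in sensor_readings)  # True = obstacle near

def choose_action(state):
    """Epsilon-greedy action selection over the tabular Q-values."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(q_table[state], key=q_table[state].get)

def update(state, action, reward, next_state):
    """Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(q_table[next_state].values())
    q_table[state][action] += ALPHA * (reward + GAMMA * best_next - q_table[state][action])

def train_episode(sim, steps=500):
    """Run one episode against a hypothetical simulator exposing
    read_sensors() -> list of floats and step(action) -> collided (bool)."""
    state = discretize(sim.read_sensors())
    for _ in range(steps):
        action = choose_action(state)
        collided = sim.step(action)
        reward = -1.0 if collided else 0.0  # punishment only on collision
        next_state = discretize(sim.read_sensors())
        update(state, action, reward, next_state)
        state = next_state
```

The punishment-only reward mirrors the task as stated in the abstract; in practice such a baseline would be tuned (discretization granularity, exploration schedule) before being compared against another learning model.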

