Learning obstacle avoidance with an operant behavior model

Cited by: 17
Authors
Gutnisky, DA
Zanutto, BS
Affiliations
[1] FI Univ Buenos Aires, Inst Ingn Biomed, RA-1063 Buenos Aires, DF, Argentina
[2] Consejo Nacl Invest Cient & Tecn, Inst Biol & Med Expt, RA-1033 Buenos Aires, DF, Argentina
Keywords
operant learning; neural networks; reinforcement learning; Q-Learning; animals; artificial neural networks
DOI
10.1162/106454604322875913
Chinese Library Classification (CLC)
TP18 [Theory of artificial intelligence]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Artificial intelligence researchers have been attracted by the idea of having robots learn how to accomplish a task rather than being told explicitly how to do so. Reinforcement learning has been proposed as an appealing framework for controlling mobile agents. Robot learning research, like research in biological systems, faces many similar problems in achieving high flexibility across a variety of tasks. In this work, the control of a vehicle in an avoidance task by a previously developed operant learning model (a form of animal learning) is studied. An environment is simulated in which a mobile robot with proximity sensors must minimize the punishment for colliding with obstacles. The results were compared with the Q-Learning algorithm, and the proposed model performed better. In this way, a new artificial intelligence agent inspired by research in neurobiology, psychology, and ethology is proposed.
Pages: 65-81
Page count: 17