Learning Visual Affordances of Objects and Tools through Autonomous Robot Exploration

Cited by: 0
Authors
Goncalves, Afonso [1 ]
Saponaro, Giovanni [1 ]
Jamone, Lorenzo [1 ]
Bernardino, Alexandre [1 ]
Affiliations
[1] Univ Lisbon, Inst Super Tecn, Inst Syst & Robot, P-1699 Lisbon, Portugal
Source
2014 IEEE INTERNATIONAL CONFERENCE ON AUTONOMOUS ROBOT SYSTEMS AND COMPETITIONS (ICARSC) | 2014
Keywords
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Endowing artificial agents with the ability to predict the consequences of their own actions and to efficiently plan their behaviors based on such predictions is a fundamental challenge in both artificial intelligence and robotics. A computationally practical yet powerful way to model this knowledge, referred to as object affordances, is through probabilistic dependencies between actions, objects and effects: this allows inferences to be made across these dependencies, such as i) predicting the effects of an action on an object, or ii) selecting the best action from a repertoire in order to obtain a desired effect on an object. We propose a probabilistic model capable of learning the mutual interaction between objects in complex manipulation tasks, where one object plays an active tool role while being grasped and used (e.g., a hammer) and another item is passively acted upon (e.g., a nail). We consider visual affordances, meaning that we do not model object labels or categories; instead, we compute a set of visual features that represent geometrical properties (e.g., convexity, roundness), which allows previously acquired knowledge to be generalized to new objects. We describe an experiment in which a simulated humanoid robot learns an affordance model by autonomously exploring different actions with the objects present in a playground scenario. We report results showing that the robot is able to i) learn meaningful relationships between actions, tools, other objects and effects, and ii) exploit the acquired knowledge to make predictions and take optimal decisions.
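To illustrate the two kinds of inference mentioned in the abstract, the following is a minimal sketch (not the authors' implementation) of a discrete affordance model: a conditional probability table P(effect | action, tool shape, object shape) estimated from counts of exploration trials, used both to predict effects and to pick the action most likely to produce a desired effect. The variable names, toy data and add-one smoothing are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of a tabular affordance model learned from exploration data.
# All names and the toy trial log below are hypothetical.
from collections import Counter, defaultdict

# Hypothetical exploration log: (action, tool_shape, object_shape, effect)
trials = [
    ("tap",  "elongated", "round", "moved_far"),
    ("tap",  "elongated", "round", "moved_far"),
    ("tap",  "round",     "round", "moved_little"),
    ("push", "elongated", "boxy",  "moved_little"),
    ("push", "round",     "round", "moved_far"),
    ("push", "round",     "round", "moved_little"),
]

# Count observed effects for every (action, tool, object) context.
counts = defaultdict(Counter)
for action, tool, obj, effect in trials:
    counts[(action, tool, obj)][effect] += 1

# All effect labels seen during exploration.
effects = {e for ctx in counts.values() for e in ctx}

def predict_effects(action, tool, obj):
    """Return P(effect | action, tool, obj) with add-one smoothing."""
    ctx = counts.get((action, tool, obj), Counter())
    total = sum(ctx.values()) + len(effects)
    return {e: (ctx[e] + 1) / total for e in effects}

def best_action(desired_effect, tool, obj, actions=("tap", "push")):
    """Select the action that maximizes the probability of the desired effect."""
    return max(actions, key=lambda a: predict_effects(a, tool, obj)[desired_effect])

if __name__ == "__main__":
    # i) predict the effect distribution of an action on given shapes
    print(predict_effects("tap", "elongated", "round"))
    # ii) choose the action expected to achieve a desired effect
    print(best_action("moved_far", "elongated", "round"))
```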
Pages: 128-133
Number of pages: 6