Denoising Auto-encoders for Learning of Objects and Tools Affordances in Continuous Space

Times Cited: 0
Authors
Dehban, Atabak [1 ]
Jamone, Lorenzo [1 ]
Kampff, Adam R. [2 ,3 ]
Santos-Victor, Jose [1 ]
Affiliations
[1] Univ Lisbon, Inst Super Tecn, Inst Syst & Robot, Lisbon, Portugal
[2] Champalimaud Ctr Unknown, Champalimaud Neurosci Programme, Lisbon, Portugal
[3] Sainsbury Wellcome Ctr Neural Circuits & Behav SW, London, England
Source
2016 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA) | 2016
Keywords
ROBOT;
DOI
Not available
CLC Number
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
The concept of affordances facilitates the encoding of relations between actions and effects in an environment centered around the agent. Such an interpretation has important impacts on several cognitive capabilities and manifestations of intelligence, such as prediction and planning. In this paper, a new framework based on denoising auto-encoders (dA) is proposed which allows an agent to explore its environment and actively learn the affordances of objects and tools by observing the consequences of acting on them. The dA serves as a unified framework to fuse multi-modal data and retrieve an entire missing modality, or a feature within a modality, given information about the other modalities. This work has two major contributions. First, since training the dA is done in continuous space, there is no need to discretize the dataset, and higher inference accuracies can be achieved relative to approaches that require data discretization (e.g. Bayesian networks). Second, by fixing the structure of the dA, knowledge can be added incrementally, making the architecture particularly useful in online learning scenarios. Evaluation scores from real and simulated robotic experiments show improvements over previous approaches, while the new model can be applied in a wider range of domains.
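The abstract's core mechanism (corrupt the input during training, then treat a missing modality at query time as corrupted input and read it off the reconstruction) can be sketched as a toy tied-weight denoising auto-encoder. This is a minimal illustration of the idea, not the authors' implementation: the network size, learning rate, corruption level, and the synthetic two-modality copy task are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class DenoisingAutoencoder:
    """Tied-weight dA trained with masking noise on concatenated modalities.
    (Illustrative sketch; hyperparameters are assumptions, not the paper's.)"""

    def __init__(self, n_visible, n_hidden, lr=0.5):
        self.W = rng.normal(0.0, 0.1, (n_visible, n_hidden))
        self.b_h = np.zeros(n_hidden)
        self.b_v = np.zeros(n_visible)
        self.lr = lr

    def reconstruct(self, x):
        # Encode, then decode with the transposed (tied) weights.
        h = sigmoid(x @ self.W + self.b_h)
        return sigmoid(h @ self.W.T + self.b_v), h

    def train_step(self, x, corruption=0.5):
        # Masking noise: randomly zero input dimensions, reconstruct the clean x.
        x_tilde = x * (rng.random(x.shape) > corruption)
        y, h = self.reconstruct(x_tilde)
        d_y = (y - x) * y * (1.0 - y)            # squared-error output gradient
        d_h = (d_y @ self.W) * h * (1.0 - h)     # backprop through encoder
        self.W -= self.lr * (np.outer(x_tilde, d_h) + np.outer(d_y, h))
        self.b_h -= self.lr * d_h
        self.b_v -= self.lr * d_y

# Toy "multi-modal" data: modality B (last 3 dims) is a copy of modality A.
patterns = rng.integers(0, 2, (200, 3)).astype(float)
data = np.hstack([patterns, patterns])

da = DenoisingAutoencoder(n_visible=6, n_hidden=8)

def missing_modality_error(model):
    # Query: zero out modality B entirely and ask the dA to fill it in.
    probe = data.copy()
    probe[:, 3:] = 0.0
    recon, _ = model.reconstruct(probe)
    return np.abs(recon[:, 3:] - data[:, 3:]).mean()

err_before = missing_modality_error(da)
for epoch in range(50):
    for x in data:
        da.train_step(x)
err_after = missing_modality_error(da)
```

After training, zeroing one modality and reconstructing recovers it from the other, which is the retrieval behavior the abstract describes; because the dA's structure is fixed, further `train_step` calls on new samples add knowledge incrementally, matching the online-learning argument.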
Pages: 4866-4871
Number of pages: 6