Understanding Tools: Task-Oriented Object Modeling, Learning and Recognition

Times Cited: 0
Authors
Zhu, Yixin [1 ]
Zhao, Yibiao [1 ]
Zhu, Song-Chun [1 ]
Affiliations
[1] Univ Calif Los Angeles, Ctr Vis Cognit Learning & Art, Los Angeles, CA 90095 USA
Source
2015 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR) | 2015
Keywords
AFFORDANCES; GEOMETRY;
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In this paper, we present a new framework, task-oriented modeling, learning and recognition, which aims at understanding the underlying functions, physics and causality in using objects as "tools". Given a task, such as cracking a nut or painting a wall, we represent each object, e.g. a hammer or a brush, in a generative spatio-temporal representation consisting of four components: i) an affordance basis to be grasped by hand; ii) a functional basis to act on a target object (the nut); iii) the imagined actions with typical motion trajectories; and iv) the underlying physical concepts, e.g. force and pressure. In a learning phase, our algorithm observes only one RGB-D video, in which a rational human picks up one object (i.e., the tool) among a number of candidates to accomplish the task. From this example, our algorithm learns the essential physical concepts in the task (e.g., the forces involved in cracking nuts). In an inference phase, our algorithm is given a new set of objects (daily objects or stones) and picks the best available choice, together with the inferred affordance basis, functional basis, imagined human actions (a sequence of poses), and the expected physical quantity that the use will produce. From this new perspective, any object can be viewed as a hammer or a shovel, and object recognition is not merely memorizing typical appearance examples for each category but reasoning about the physical mechanisms in various tasks to achieve generalization.
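To make the abstract's four-component representation concrete, the following is a minimal Python sketch, not the authors' code: every class name, field, and number here is a hypothetical assumption for illustration. It models one imagined use of a candidate object and ranks candidates by how closely their expected physical quantities match those learned from the single demonstration.

    # Hypothetical sketch of the four-component, task-oriented representation.
    # All names and values are illustrative assumptions, not the paper's code.
    from dataclasses import dataclass, field
    from typing import Dict, List, Tuple

    Point3D = Tuple[float, float, float]

    @dataclass
    class ToolHypothesis:
        """One imagined way of using a candidate object for a task."""
        affordance_basis: List[Point3D]   # i) region grasped by the hand
        functional_basis: List[Point3D]   # ii) region acting on the target (e.g. the nut)
        trajectory: List[Point3D]         # iii) imagined motion of the functional basis
        physics: Dict[str, float] = field(default_factory=dict)  # iv) e.g. force, pressure

    def score(h: ToolHypothesis, required: Dict[str, float]) -> float:
        """Toy ranking: negative distance between the physical quantities the
        imagined use would produce and those the task requires."""
        return -sum(abs(h.physics.get(k, 0.0) - v) for k, v in required.items())

    # Inference phase: given new objects, pick the hypothesis whose expected
    # physics best matches the demonstration (assumed here to need ~35 N).
    candidates = [
        ToolHypothesis([(0, 0, 0)], [(0.3, 0, 0)], [(0.3, 0.2, 0)], {"force": 40.0}),
        ToolHypothesis([(0, 0, 0)], [(0.1, 0, 0)], [(0.1, 0.1, 0)], {"force": 5.0}),
    ]
    best = max(candidates, key=lambda h: score(h, {"force": 35.0}))
    print(best.physics)  # the heavier, hammer-like candidate wins

In the paper the learning phase would supply the required quantities from the one observed RGB-D demonstration; the toy distance metric above simply stands in for that comparison.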
Pages: 2855-2864
Number of Pages: 10