Autonomous Terrain Classification With Co- and Self-Training Approach

Cited by: 72
Authors
Otsu, Kyohei [1 ]
Ono, Masahiro [2 ]
Fuchs, Thomas J. [3 ]
Baldwin, Ian [2 ]
Kubota, Takashi [4 ]
Affiliations
[1] Univ Tokyo, Dept Elect Engn & Informat Syst, Bunkyo Ku, Tokyo 1138656, Japan
[2] CALTECH, Jet Prop Lab, 4800 Oak Grove Dr, Pasadena, CA 91109 USA
[3] Mem Sloan Kettering Canc Ctr, New York, NY 10065 USA
[4] Japan Aerosp Explorat Agcy, Inst Space & Aeronaut Sci, Sagamihara, Kanagawa 2525210, Japan
Source
IEEE ROBOTICS AND AUTOMATION LETTERS | 2016, Vol. 1, No. 2
Funding
Japan Society for the Promotion of Science; National Aeronautics and Space Administration (NASA);
Keywords
Visual Learning; Semantic Scene Understanding; Space Robotics;
DOI
10.1109/LRA.2016.2525040
Chinese Library Classification
TP24 [Robotics];
Subject Classification Codes
080202 ; 1405 ;
Abstract
Identifying terrain type is crucial to safely operating planetary exploration rovers. Vision-based terrain classifiers, typically trained on thousands of labeled images with machine learning methods, have proven particularly successful. However, since planetary rovers are to boldly go where no one has gone before, training data are usually not available a priori; instead, rovers must quickly learn from their own experiences in the early phase of surface operation. This research addresses the challenge by combining two key ideas. The first is to use both onboard imagery and vibration data, letting rovers learn from physical experience through self-supervised learning; the underlying observation is that visually similar terrains can be disambiguated by their mechanical vibration signatures. The second is to employ the co- and self-training approaches. Co-training trains two classifiers separately on the vision and vibration data, then iteratively re-trains each on the other's output, while self-training, applied only to the vision-based classifier, re-trains that classifier on its own output. Both approaches effectively increase the number of labels and hence enable the terrain classifiers to operate from a sparse training dataset. The proposed approach was validated with a four-wheeled test rover on Mars-analogous terrain, including bedrock, soil, and sand. The co-training setup, based on support vector machines with color and wavelet-based features, estimated terrain types with 82% accuracy from only three labeled images.
Pages: 814-819
Page count: 6