Knowledge Guided Disambiguation for Large-Scale Scene Classification With Multi-Resolution CNNs

Cited by: 107
Authors
Wang, Limin [1 ]
Guo, Sheng [2 ,3 ]
Huang, Weilin [2 ,4 ]
Xiong, Yuanjun [5 ]
Qiao, Yu [6 ,7 ]
Affiliations
[1] ETH, Comp Vis Lab, CH-8092 Zurich, Switzerland
[2] Chinese Acad Sci, Shenzhen Inst Adv Technol, Shenzhen, Peoples R China
[3] Univ Chinese Acad Sci, Shenzhen Coll Adv Technol, Beijing 518055, Peoples R China
[4] Univ Oxford, Visual Geometry Grp, Oxford OX1 2JD, England
[5] Chinese Univ Hong Kong, Dept Informat Engn, Hong Kong, Peoples R China
[6] Chinese Acad Sci, Shenzhen Inst Adv Technol, Guangdong Key Lab Comp Vis & Virtual Real, Shenzhen, Peoples R China
[7] Chinese Univ Hong Kong, Hong Kong, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Scene recognition; large-scale recognition; multi-resolutions; disambiguation; convolutional neural network; REPRESENTATION;
DOI
10.1109/TIP.2017.2675339
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
Convolutional neural networks (CNNs) have made remarkable progress on scene recognition, partly due to recent large-scale scene datasets such as Places and Places2. Scene categories are often defined by multi-level information, including local objects, global layout, and background environment, leading to large intra-class variations. In addition, with the growing number of scene categories, label ambiguity has become another crucial issue in large-scale classification. This paper focuses on large-scale scene recognition and makes two major contributions to tackle these issues. First, we propose a multi-resolution CNN architecture that captures visual content and structure at multiple levels. The multi-resolution CNNs are composed of coarse-resolution CNNs and fine-resolution CNNs, which are complementary to each other. Second, we design two knowledge-guided disambiguation techniques to deal with label ambiguity: 1) we exploit knowledge from the confusion matrix computed on validation data to merge ambiguous classes into a super category, and 2) we utilize the knowledge of extra networks to produce a soft label for each image. The super categories or soft labels are then employed to guide CNN training on Places2. We conduct extensive experiments on three large-scale image datasets (ImageNet, Places, and Places2), demonstrating the effectiveness of our approach. Furthermore, our method participated in two major scene recognition challenges, achieving second place in the Places2 challenge at ILSVRC 2015 and first place in the LSUN challenge at CVPR 2016. Finally, we directly test the learned representations on other scene benchmarks and obtain new state-of-the-art results on MIT Indoor67 (86.7%) and SUN397 (72.0%). We release the code and models at https://github.com/wanglimin/MRCNN-Scene-Recognition.
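The confusion-matrix-based merging described in the abstract can be sketched as below. This is a minimal illustration, not the paper's exact procedure: the greedy pairwise merging rule, the threshold value, and the function name are all assumptions introduced here for clarity.

```python
import numpy as np

def merge_ambiguous_classes(confusion, threshold=0.3):
    """Merge class pairs whose mutual confusion exceeds a threshold
    into super categories (illustrative sketch; the paper's exact
    merging criterion may differ)."""
    n = confusion.shape[0]
    # Row-normalize so entry (i, j) approximates P(predict j | true class i).
    probs = confusion / confusion.sum(axis=1, keepdims=True)

    # Union-find over classes: merged classes share one root.
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            # Symmetric confusion score between classes i and j.
            if probs[i, j] + probs[j, i] > threshold:
                parent[find(i)] = find(j)

    # Map each original class label to a compact super-category id.
    roots = sorted({find(i) for i in range(n)})
    return {i: roots.index(find(i)) for i in range(n)}
```

For example, given a validation confusion matrix in which two scene classes are frequently mistaken for each other, the function assigns them the same super-category id while leaving well-separated classes in their own categories; training then proceeds on the coarser label set.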
Pages: 2055-2068
Page count: 14