VFM: Visual Feedback Model for Robust Object Recognition

Cited by: 0
Authors
Chong Wang
Kai-Qi Huang
Affiliations
[1] National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences
Source
Journal of Computer Science and Technology | 2015, Vol. 30
Keywords
object recognition; object classification; object detection; visual feedback
Abstract
Object recognition, which consists of classification and detection, has two important attributes for robustness: 1) closeness: detection windows should be as close to object locations as possible; and 2) adaptiveness: object matching should adapt to object variations within a class. It is difficult to satisfy both attributes with traditional methods, which treat classification and detection separately; recent studies therefore combine them through confidence contextualization and foreground modeling. However, these combinations neglect feature saliency and object structure, although biological evidence suggests that both are important in guiding recognition from low level to high level. In fact, object recognition originates in the mechanism of the “what” and “where” pathways in the human visual system. More importantly, these pathways feed back to each other and exchange useful information, which may improve closeness and adaptiveness. Inspired by this visual feedback, we propose a robust object recognition framework built on a computational visual feedback model (VFM) between classification and detection. In the “what” feedback, feature saliency from classification is exploited to rectify detection windows for better closeness; in the “where” feedback, object parts from detection are used to match object structure for better adaptiveness. Experimental results show that the “what” and “where” feedback effectively improves closeness and adaptiveness for object recognition, and encouraging improvements are obtained on the challenging PASCAL VOC 2007 dataset.
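The two feedback directions in the abstract can be pictured with a toy sketch. The code below is purely illustrative and is not the authors' VFM implementation: the function names, the point-based saliency and part representations, and all numbers are assumptions chosen only to make the two ideas concrete ("what" feedback shifts a window toward salient features; "where" feedback scores detected parts against a class structure template).

```python
# Illustrative sketch of the two VFM feedback directions described in the
# abstract. All names, data layouts, and numbers are hypothetical, not the
# paper's actual method.

def rectify_window(window, saliency):
    """"What" feedback: re-center a detection window (x, y, w, h) on the
    weighted centroid of salient classification features (closeness)."""
    x, y, w, h = window
    total = sum(wt for _, _, wt in saliency)
    cx = sum(px * wt for px, _, wt in saliency) / total
    cy = sum(py * wt for _, py, wt in saliency) / total
    return (cx - w / 2, cy - h / 2, w, h)

def match_structure(parts, template):
    """"Where" feedback: score detected part locations against a class
    structure template, tolerating per-instance variation (adaptiveness)."""
    cost = sum(abs(px - tx) + abs(py - ty)
               for (px, py), (tx, ty) in zip(parts, template))
    return 1.0 / (1.0 + cost / len(parts))  # 1.0 = perfect structural fit

# Toy usage: two salient points pull the window right; one part is offset
# by a single pixel from the template.
saliency = [(10, 10, 1.0), (30, 10, 1.0)]        # (x, y, weight)
win = rectify_window((0, 0, 20, 20), saliency)   # -> (10.0, 0.0, 20, 20)
score = match_structure([(0, 0), (5, 5)], [(0, 0), (5, 6)])
print(win, round(score, 2))                      # (10.0, 0.0, 20, 20) 0.67
```

In the actual paper these roles are played by learned saliency from the classifier and part detections from the detector; the sketch only fixes the direction of information flow between the two.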
Pages: 325–339 (14 pages)