Review of Visual Attention Detection

Cited by: 0
Authors
Wang W.-G. [1 ]
Shen J.-B. [1 ]
Jia Y.-D. [1 ]
Affiliations
[1] Beijing Laboratory of Intelligent Information Technology, Beijing Institute of Technology, Beijing
Source
Ruan Jian Xue Bao/Journal of Software | 2019 / Vol. 30 / No. 2
Funding
National Natural Science Foundation of China
Keywords
Eye fixation prediction; Salient object detection; Visual attention; Visual saliency;
DOI
10.13328/j.cnki.jos.005636
Abstract
Humans have the ability to quickly select a subset of the visual input and allocate processing resources to those visually important regions. In the computer vision community, understanding and emulating this attention mechanism of the human visual system has attracted considerable interest from researchers and has found a wide range of applications. More recently, with ever-increasing computational power and the availability of large-scale saliency datasets, deep learning has become a popular tool for modeling visual attention. This review covers recent advances in visual attention modeling, including fixation prediction and salient object detection. It also discusses popular visual attention benchmarks and various evaluation metrics. The emphasis of the review is on both deep learning based studies and representative non-deep-learning models. Extensive experiments are also performed on various benchmarks to evaluate the performance of these visual attention models. Finally, the review highlights current research trends and provides insight into future directions. © Copyright 2019, Institute of Software, the Chinese Academy of Sciences. All rights reserved.
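As a point of reference for the evaluation metrics mentioned in the abstract, the short sketch below (an illustration, not taken from the paper) computes two metrics commonly used in this literature: NSS (Normalized Scanpath Saliency) for fixation prediction and MAE (Mean Absolute Error) for salient object detection. The array names and shapes are assumptions for demonstration only.

import numpy as np

def nss(pred_map, fixation_map):
    # Normalized Scanpath Saliency: standardize the predicted saliency map and
    # average its values at ground-truth fixation locations (higher is better).
    s = (pred_map - pred_map.mean()) / (pred_map.std() + 1e-12)
    return float(s[fixation_map.astype(bool)].mean())

def mae(pred_map, gt_mask):
    # Mean Absolute Error between a min-max normalized saliency map and a
    # binary ground-truth object mask (lower is better).
    s = (pred_map - pred_map.min()) / (pred_map.max() - pred_map.min() + 1e-12)
    return float(np.abs(s - gt_mask.astype(float)).mean())

# Toy example with random data, only to show the expected inputs
# (all values here are illustrative, not from the paper's experiments).
pred = np.random.rand(240, 320)                 # predicted saliency map
fix = np.zeros((240, 320)); fix[120, 160] = 1   # binary fixation map
mask = np.random.rand(240, 320) > 0.5           # binary salient-object mask
print(nss(pred, fix), mae(pred, mask))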
Pages: 416-439
Page count: 23
References
118 in total
  • [1] Koch K., McLean J., Segev R., Freed M.A., Berry M.J., Balasubramanian V., Sterling P., How much the eye tells the brain, Current Biology, 16, 14, pp. 1428-1434, (2006)
  • [2] Borji A., Itti L., State-of-the-art in visual attention modeling, IEEE Trans. on Pattern Analysis and Machine Intelligence, 35, 1, pp. 185-207, (2013)
  • [3] Carrasco M., Visual attention: The past 25 years, Vision Research, 51, 13, pp. 1484-1525, (2011)
  • [4] Connor C., Egeth H., Yantis S., Visual attention: Bottom-up versus top-down, Current Biology, 14, 19, pp. 850-852, (2004)
  • [5] Rutishauser U., Walther D., Koch C., Perona P., Is bottom-up attention useful for object recognition, Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition, (2004)
  • [6] Borji A., Itti L., Scene classification with a sparse set of salient regions, Proc. of the IEEE Int'l Conf. on Robotics and Automation, pp. 1902-1908, (2011)
  • [7] Zhang D.W., Han J.W., Jiang L., Ye S.M., Chang X.J., Revealing event saliency in unconstrained video collection, IEEE Trans. on Image Processing, 26, 4, pp. 1746-1758, (2017)
  • [8] Koch C., Ullman S., Shifts in selective visual attention: towards the underlying neural circuitry, Human Neurobiology, 4, 4, pp. 219-227, (1985)
  • [9] Itti L., Koch C., Niebur E., A model of saliency-based visual attention for rapid scene analysis, IEEE Trans. on Pattern Analysis and Machine Intelligence, 20, 11, pp. 1254-1259, (1998)
  • [10] Treisman A.M., Gelade G., A feature-integration theory of attention, Cognitive Psychology, 12, 1, pp. 97-136, (1980)