Where Does the Driver Look? Top-Down-Based Saliency Detection in a Traffic Driving Environment

Cited by: 73
Authors
Deng, Tao [1 ]
Yang, Kaifu [1 ]
Li, Yongjie [1 ]
Yan, Hongmei [1 ]
Affiliations
[1] Univ Elect Sci & Technol China, Sch Life Sci & Technol, Key Lab Neuroinformat, Minist Educ, Chengdu 610054, Peoples R China
Keywords
Traffic environment; bottom-up; top-down; visual attention; saliency detection; VISUAL-ATTENTION; SIGN DETECTION; EYE-MOVEMENTS; HUMAN GAZE; CLASSIFICATION; EXPERIENCE; SEQUENCES; VISION; NIGHT; TASK;
DOI
10.1109/TITS.2016.2535402
CLC Classification
TU [Architectural Science];
Discipline Code
0813;
Abstract
A traffic driving environment is a complex and dynamically changing scene. When driving, drivers allocate their attention to the most important and salient areas or targets. Traffic saliency detection, which computes the salient and priority areas or targets in a specific driving environment, is an indispensable part of intelligent transportation systems and could support autonomous driving, traffic sign detection, driver training, collision warning, and other tasks. Recent advances in visual attention models have made substantial progress in describing eye movements over simple stimuli and tasks such as free viewing or visual search. However, to date, no computational framework can accurately mimic a driver's gaze behavior and saliency detection in a complex traffic driving environment. In this paper, we analyzed the eye-tracking data of 40 subjects, consisting of nondrivers and experienced drivers, while they viewed 100 traffic images. We found that a driver's attention was mostly concentrated on the end of the road in front of the vehicle. We propose that the vanishing point of the road can serve as valuable top-down guidance in a traffic saliency detection model, and we accordingly build a framework that combines a classic bottom-up saliency model with this top-down cue. The results show that our proposed vanishing-point-based top-down model can effectively simulate a driver's attention areas in a driving environment.
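The record above gives only the high-level idea of the model; the published paper specifies the actual formulation. As a rough, non-authoritative sketch of how such a bottom-up/top-down fusion might look, the snippet below combines a normalized bottom-up saliency map with a 2D Gaussian prior centered on the road's vanishing point. The function names, the Gaussian form of the prior, and the convex fusion weight are illustrative assumptions, not the authors' published method.

```python
import numpy as np

def vanishing_point_prior(shape, vp_xy, sigma=60.0):
    """Hypothetical top-down cue: a 2D Gaussian centered on the road's vanishing point."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    vx, vy = vp_xy
    prior = np.exp(-((xs - vx) ** 2 + (ys - vy) ** 2) / (2.0 * sigma ** 2))
    return prior / prior.max()

def fuse_saliency(bottom_up, vp_xy, sigma=60.0, top_down_weight=0.5):
    """Assumed fusion rule: convex combination of bottom-up saliency and the vanishing-point prior."""
    bu = (bottom_up - bottom_up.min()) / (bottom_up.max() - bottom_up.min() + 1e-8)
    td = vanishing_point_prior(bottom_up.shape, vp_xy, sigma)
    return (1.0 - top_down_weight) * bu + top_down_weight * td

# Example: a random stand-in for a bottom-up saliency map on a 240x320 frame,
# with the vanishing point assumed near the image center.
if __name__ == "__main__":
    bu_map = np.random.rand(240, 320)
    saliency = fuse_saliency(bu_map, vp_xy=(160, 110))
    print(saliency.shape, float(saliency.min()), float(saliency.max()))
```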
Pages: 2051-2062
Number of pages: 12