Context-aware obstacle detection for navigation by visually impaired

Cited by: 28
Authors
Gharani, Pedram [1 ]
Karimi, Hassan A. [1 ]
Affiliations
[1] Univ Pittsburgh, Sch Comp & Informat, Geoinformat Lab, Pittsburgh, PA USA
Keywords
Obstacle detection; Indoor navigation; Visual navigation; Visually impaired; OPTICAL-FLOW ESTIMATION; MOBILE ROBOT; SYSTEM; VISION; SENSOR;
DOI
10.1016/j.imavis.2017.06.002
CLC classification number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
This paper presents a context-aware, smartphone-based visual obstacle detection approach to aid visually impaired people in navigating indoor environments. The approach is based on processing two consecutive frames (images), computing optical flow, and tracking certain points to detect obstacles. The frame rate of the video stream is determined using a context-aware data fusion technique applied to the smartphone's sensors. Through an efficient and novel algorithm, a point dataset on each pair of consecutive frames is constructed and evaluated to check whether the points belong to an obstacle. In addition to selecting points based on the texture in each frame, our algorithm also considers the heading of user movement to find critical areas on the image plane. We validated the algorithm through experiments by comparing it against two comparable algorithms. The experiments were conducted in different indoor settings, and the results were compared and analyzed in terms of precision, recall, accuracy, and F-measure. The results show that, in comparison to the other two widely used algorithms for this process, our algorithm is more precise. We also considered the time-to-contact parameter for clustering the points and showed how this parameter improves clustering performance. (C) 2017 Elsevier B.V. All rights reserved.
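As an illustration of the pipeline summarized in the abstract (sparse point tracking via optical flow between two consecutive frames, followed by a time-to-contact test on the tracked points), a minimal Python/OpenCV sketch is given below. The function name, the fixed focus of expansion at the image centre, and the TTC threshold are assumptions made for illustration only; they are not taken from the paper, which additionally fuses smartphone sensor data to adapt the frame rate, uses the user's heading to focus on critical image regions, and clusters points by TTC.

# Illustrative sketch (not the authors' implementation): sparse optical flow
# between two consecutive frames plus a per-point time-to-contact estimate.
import cv2
import numpy as np

def detect_obstacle_points(frame_prev, frame_next, ttc_threshold=30.0):
    """Track feature points across two frames and flag those whose
    estimated time-to-contact (in frames) falls below a threshold."""
    gray_prev = cv2.cvtColor(frame_prev, cv2.COLOR_BGR2GRAY)
    gray_next = cv2.cvtColor(frame_next, cv2.COLOR_BGR2GRAY)

    # Points selected on image texture (Shi-Tomasi corners).
    pts_prev = cv2.goodFeaturesToTrack(gray_prev, maxCorners=300,
                                       qualityLevel=0.01, minDistance=7)
    if pts_prev is None:
        return np.empty((0, 2)), np.empty(0)

    # Pyramidal Lucas-Kanade optical flow between the two consecutive frames.
    pts_next, status, _ = cv2.calcOpticalFlowPyrLK(
        gray_prev, gray_next, pts_prev, None,
        winSize=(21, 21), maxLevel=3)
    good_prev = pts_prev[status.ravel() == 1].reshape(-1, 2)
    good_next = pts_next[status.ravel() == 1].reshape(-1, 2)

    # Crude time-to-contact: assume forward motion with the focus of
    # expansion (FOE) near the image centre; TTC ~ radial distance / radial flow.
    h, w = gray_prev.shape
    foe = np.array([w / 2.0, h / 2.0])   # assumption: FOE fixed at image centre
    radial = good_next - foe
    flow = good_next - good_prev          # pixels per frame
    radial_speed = np.sum(radial * flow, axis=1) / (
        np.linalg.norm(radial, axis=1) + 1e-6)
    ttc = np.linalg.norm(radial, axis=1) / (radial_speed + 1e-6)

    # Points expanding quickly away from the FOE (small positive TTC)
    # are obstacle candidates.
    mask = (radial_speed > 0) & (ttc < ttc_threshold)
    return good_next[mask], ttc[mask]

In practice, the two input frames would be sampled from the smartphone camera stream at the rate chosen by the sensor-fusion step described above, and the surviving points would then be clustered using their TTC values.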
Pages: 103-115
Page count: 13
References
Total: 62
[1] [Anonymous], 2008, WORKSH COMP VIS APPL
[2] [Anonymous], 2013, THESIS
[3] Bao L, Intille SS. Activity recognition from user-annotated acceleration data. PERVASIVE COMPUTING, PROCEEDINGS, 2004, 3001: 1-17.
[4] Batavia PH, 1999, Computer Standards & Interfaces, 20: 466, DOI 10.1016/S0920-5489(99)91018-8
[5] Beauchemin SS, Barron JL. The computation of optical flow. ACM COMPUTING SURVEYS, 1995, 27(3): 433-467.
[6] Benjamin JM, 1973, P SAN DIEG BIOM S
[7] Boroujeni NS, 2012, IEEE IMAGE PROC, P65, DOI 10.1109/ICIP.2012.6466796
[8] Bousbia-Salah M, Bettayeb M, Larbi A. A navigation aid for blind people. JOURNAL OF INTELLIGENT & ROBOTIC SYSTEMS, 2011, 64(3-4): 387-400.
[9] Browning NA, Grossberg S, Mingolla E. A neural model of how the brain computes heading from optic flow in realistic scenes. COGNITIVE PSYCHOLOGY, 2009, 59(4): 320-356.
[10] Camus T, 1994, REAL TIME OPTICAL FL