Enhancing perception for the visually impaired with deep learning techniques and low-cost wearable sensors

Cited by: 32
Authors
Bauer, Zuria [1 ]
Dominguez, Alejandro [1 ]
Cruz, Edmanuel [1 ]
Gomez-Donoso, Francisco [1 ]
Orts-Escolano, Sergio [1 ]
Cazorla, Miguel [1 ]
Affiliations
[1] Univ Alicante, Inst Comp Res, POB 99, Alicante 03080, Spain
Keywords
Visual impaired assistant; Deep learning; Outdoors; Depth from monocular frames;
DOI
10.1016/j.patrec.2019.03.008
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
As estimated by the World Health Organization, millions of people live with some form of vision impairment, and as a consequence some of them face mobility problems in outdoor environments. With the aim of helping them, we propose in this work a system capable of delivering the position of potential obstacles in outdoor scenarios. Our approach is based on non-intrusive wearable devices and also focuses on being low-cost. First, a depth map of the scene is estimated from a color image, which provides 3D information about the environment. Then, an urban object detector identifies the semantics of the objects in the scene. Finally, the three-dimensional and semantic data are summarized into a simpler representation of the potential obstacles in front of the user. This information is transmitted to the user through spoken or haptic feedback. Our system runs at about 3.8 fps and achieved an 87.99% mean accuracy in obstacle presence detection. Finally, we deployed our system in a pilot test involving an actual person with vision impairment, who validated the effectiveness of our proposal for improving their navigation capabilities outdoors. (C) 2019 Elsevier B.V. All rights reserved.
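The final fusion step described in the abstract — combining the estimated depth map with the detector's semantic output into a simpler obstacle representation — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the three-region (left/center/right) split, the median-depth heuristic, and the range threshold are all assumptions introduced here for clarity.

```python
import numpy as np

def summarize_obstacles(depth_map, detections, regions=3, max_range=10.0):
    """Fuse a per-pixel depth estimate with object detections into a
    coarse obstacle summary the user could receive as spoken/haptic cues.

    depth_map: (H, W) array of estimated depths in meters.
    detections: list of (label, (x0, y0, x1, y1)) bounding boxes.
    Returns one (label, distance) tuple per horizontal region, or None
    when no obstacle closer than max_range falls in that region.
    """
    h, w = depth_map.shape
    summary = [None] * regions
    for label, (x0, y0, x1, y1) in detections:
        # Median depth inside the box is robust to background pixels.
        dist = float(np.median(depth_map[y0:y1, x0:x1]))
        if dist > max_range:
            continue  # too far away to be a relevant obstacle
        # Assign the box to the region containing its horizontal center.
        region = min(int(((x0 + x1) / 2) / w * regions), regions - 1)
        # Keep only the nearest obstacle per region.
        if summary[region] is None or dist < summary[region][1]:
            summary[region] = (label, dist)
    return summary
```

A summary like `[("car", 2.0), None, None]` would then map directly to feedback such as "car, two meters, on your left".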
Pages: 27-36
Page count: 10