A smart obstacle avoiding technology based on depth camera for blind and visually impaired people

Cited: 0
Authors
He, Jian [1 ]
Song, Xuena [2 ]
Su, Yuhan [2 ]
Xiao, Zhonghua [2 ]
Affiliations
[1] Beijing Univ Technol, Beijing Engn Res Ctr IoT Software & Syst, Beijing 100124, Peoples R China
[2] Beijing Univ Technol, Fac Informat Technol, Beijing 100124, Peoples R China
Keywords
Visual impairment; Travel aid; Feature extraction; Obstacle detection; Point cloud; SYSTEM;
DOI
10.1007/s42486-023-00136-7
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Assisting blind and visually impaired (BVI) individuals during outdoor travel remains challenging. In this paper, we propose a set of low-cost wearable obstacle avoidance devices and introduce an obstacle detection algorithm called L-PointPillars, which operates on point cloud data and is suitable for edge devices. We first model the obstacles faced by BVI individuals during outdoor travel and then establish a mapping between the information space and the physical space based on point clouds. We then introduce depthwise separable convolutions and attention mechanisms to develop L-PointPillars, a fast neural network for obstacle detection designed specifically for wearable obstacle detection devices. Finally, we implement a wearable electronic travel aid device (WETAD) based on L-PointPillars on the Jetson Xavier NX. Experiments show that although L-PointPillars reduces the number of parameters of the original PointPillars by 75%, WETAD still achieves an average obstacle detection accuracy of 95.3%. It takes an average of 144 ms to process each frame during outdoor travel, more than twice as fast as the SECOND network and a 31% improvement over PointPillars.
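The 75% parameter reduction reported above is consistent with replacing standard convolutions by depthwise separable ones, as the abstract indicates. The record does not give the network's exact layer shapes, so the following is only an illustrative parameter-count comparison under assumed channel and kernel sizes, not the authors' implementation:

```python
def conv_params(c_in, c_out, k):
    """Parameters of a standard k x k convolution (bias omitted)."""
    return c_in * c_out * k * k

def dw_sep_params(c_in, c_out, k):
    """Depthwise separable convolution: a k x k depthwise pass
    (one filter per input channel) followed by a 1 x 1 pointwise
    convolution that mixes channels."""
    depthwise = c_in * k * k       # per-channel spatial filtering
    pointwise = c_in * c_out       # 1x1 cross-channel mixing
    return depthwise + pointwise

# Hypothetical layer: 64 input channels, 64 output channels, 3x3 kernel.
std = conv_params(64, 64, 3)       # 36864 parameters
sep = dw_sep_params(64, 64, 3)     # 4672 parameters
saving = 1 - sep / std
print(f"standard={std}, separable={sep}, saving={saving:.1%}")
```

At these assumed sizes the separable form uses roughly 87% fewer parameters for this single layer; the overall 75% figure reported for L-PointPillars would depend on which layers of PointPillars were replaced.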
Pages: 382-395
Page count: 14