Seeing Eye Phone: a smart phone-based indoor localization and guidance system for the visually impaired

Cited: 0
Authors
Dong Zhang
Dah-Jye Lee
Brandon Taylor
Affiliations
[1] Sun Yat-sen University, School of Information Science and Technology
[2] Brigham Young University, Department of Electrical and Computer Engineering
Source
Machine Vision and Applications | 2014, Vol. 25
Keywords
Smart phone; Pose estimation; Visually impaired; Indoor guidance system
DOI
Not available
Abstract
To help the visually impaired navigate unfamiliar environments such as public buildings, this paper presents a novel smart phone-based, vision-based indoor localization and guidance system called Seeing Eye Phone. The system consists of the user's smart phone and a server. The smart phone captures forward-facing images and transmits them to the server, which detects and describes 2D features using SURF and matches them against the features of stored map images annotated with the building's corresponding 3D information. Once features are matched, the Direct Linear Transform is run on a subset of correspondences to obtain a rough initial pose estimate, which the Levenberg–Marquardt algorithm then refines toward an optimal solution. From the estimated pose and the camera's intrinsic parameters, the user's location and orientation are computed using the 3D coordinates stored for the features of each map image. This positional information is transmitted back to the smart phone and communicated to the user via text-to-speech. The system combines efficient techniques such as SURF, homographies, multi-view geometry, and 3D-to-2D reprojection to solve a unique problem that benefits the visually impaired. The experimental results demonstrate the feasibility of accomplishing a complex task with a simple machine vision system design and the potential for building a commercial product on this design.
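For concreteness, the server-side matching and pose-estimation step described in the abstract can be sketched with OpenCV. This is a minimal illustration under stated assumptions, not the authors' implementation: the function name estimate_pose, the inputs map_descriptors, map_points_3d, and K are hypothetical, and solvePnPRansac (which pairs a DLT-style initialization with Levenberg–Marquardt refinement, plus RANSAC outlier rejection) stands in for the paper's DLT-then-LM procedure.

```python
# Sketch of the feature-matching and pose-estimation pipeline (assumptions noted above).
# SURF lives in opencv-contrib (cv2.xfeatures2d); swap in SIFT/ORB if it is unavailable.
import cv2
import numpy as np

def estimate_pose(query_img, map_descriptors, map_points_3d, K, dist_coeffs=None):
    """Match a phone image to a stored map image and estimate the camera pose.

    map_descriptors: SURF descriptors of a stored map image (N x 64 or N x 128)
    map_points_3d:   3D building coordinates for those descriptors (N x 3)
    K:               3x3 camera intrinsic matrix
    """
    # Detect and describe 2D features in the query image with SURF.
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    keypoints, descriptors = surf.detectAndCompute(query_img, None)

    # Match query descriptors against the stored map descriptors.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(descriptors, map_descriptors, k=2)

    # Lowe's ratio test keeps only distinctive correspondences.
    good = []
    for pair in matches:
        if len(pair) == 2 and pair[0].distance < 0.7 * pair[1].distance:
            good.append(pair[0])
    if len(good) < 6:
        return None  # too few correspondences for a reliable pose

    pts_2d = np.float32([keypoints[m.queryIdx].pt for m in good])
    pts_3d = np.float32([map_points_3d[m.trainIdx] for m in good])

    # DLT-style initialization refined by Levenberg-Marquardt, with RANSAC
    # rejecting mismatched features.
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        pts_3d, pts_2d, K, dist_coeffs, flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        return None

    # Camera position in the building frame: C = -R^T t.
    R, _ = cv2.Rodrigues(rvec)
    position = (-R.T @ tvec).ravel()
    return position, R
```

In a setup like the one described, the server would run this against each candidate map image and report the best-supported pose (e.g., the one with the most RANSAC inliers) back to the phone for text-to-speech output.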
Pages: 811-822
Page count: 11