CloudNavi: Toward Ubiquitous Indoor Navigation Service with 3D Point Clouds

Cited by: 17
Authors
Teng, Xiaoqiang [1]
Guo, Deke [1,2]
Guo, Yulan [3,4]
Zhou, Xiaolei [5]
Liu, Zhong [1]
Affiliations
[1] Natl Univ Def Technol, Coll Syst Engn, Changsha 410073, Hunan, Peoples R China
[2] Tianjin Univ, Coll Intelligence & Comp, Tianjin 300350, Peoples R China
[3] Natl Univ Def Technol, Coll Elect Sci, Changsha 410073, Hunan, Peoples R China
[4] Sun Yat Sen Univ, Sch Elect & Commun Engn, Guangzhou 510275, Guangdong, Peoples R China
[5] Natl Univ Def Technol, 63 Res Inst, Nanjing 210089, Jiangsu, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Indoor navigation; point cloud processing; mobile crowdsourcing; 3D path-map; indoor localization; LOCALIZATION; ACCURATE;
DOI
10.1145/3216722
CLC Number
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
The rapid development of mobile computing has made indoor navigation one of the most attractive and promising applications. Conventional indoor navigation systems depend on either dedicated infrastructure or indoor floor maps. This article presents CloudNavi, a ubiquitous indoor navigation solution that relies on point clouds acquired by the 3D camera embedded in a mobile device. CloudNavi first efficiently infers the walking trace of each user from captured point clouds and inertial data. Walking traces and associated point clouds shared by many users are combined into point cloud traces, which are then used to generate a 3D path-map. CloudNavi can thus accurately estimate a user's location by fusing point clouds and inertial data with a particle filter algorithm, and then guide the user from the current location to the destination. Extensive experiments are conducted on office building and shopping mall datasets. The results indicate that CloudNavi exhibits outstanding navigation performance in both environments and achieves around 34% improvement over the state-of-the-art method.
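The localization step described in the abstract fuses inertial motion with point-cloud matching via a particle filter. The following is a minimal illustrative sketch, not the paper's implementation: it treats the point-cloud match as a noisy 2D position fix and the inertial data as a step length and heading, and all function names and noise parameters here are assumptions.

```python
import numpy as np

def particle_filter_step(particles, weights, step_len, heading, obs_pos, rng,
                         motion_noise=0.1, obs_noise=0.5):
    """One predict/update/resample cycle fusing an inertial motion step
    (step_len, heading) with a position fix from point-cloud matching
    (obs_pos). particles is an (N, 2) array of (x, y) hypotheses."""
    n = len(particles)
    # Predict: advance each particle along the inertial heading, with noise.
    dx = step_len * np.cos(heading) + rng.normal(0, motion_noise, n)
    dy = step_len * np.sin(heading) + rng.normal(0, motion_noise, n)
    particles = particles + np.stack([dx, dy], axis=1)
    # Update: reweight particles by agreement with the point-cloud fix.
    d2 = np.sum((particles - obs_pos) ** 2, axis=1)
    weights = weights * np.exp(-d2 / (2 * obs_noise ** 2))
    weights = weights / weights.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < n / 2:
        idx = rng.choice(n, size=n, p=weights)
        particles, weights = particles[idx], np.full(n, 1.0 / n)
    return particles, weights

# Toy walk: ground truth moves east 0.7 m per step; the filter tracks it.
rng = np.random.default_rng(0)
particles = rng.uniform(-5, 5, size=(1000, 2))
weights = np.full(1000, 1.0 / 1000)
truth = np.zeros(2)
for _ in range(20):
    truth = truth + np.array([0.7, 0.0])
    obs = truth + rng.normal(0, 0.5, 2)   # simulated point-cloud position fix
    particles, weights = particle_filter_step(
        particles, weights, 0.7, 0.0, obs, rng)
estimate = np.average(particles, axis=0, weights=weights)
print(np.linalg.norm(estimate - truth))   # residual tracking error (metres)
```

The weighted particle mean converges toward the true position because the inertial prediction constrains motion between fixes while the point-cloud observation repeatedly prunes inconsistent hypotheses.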
Pages: 28