DL-RNN: An Accurate Indoor Localization Method via Double RNNs

Cited by: 36
Authors
Bai, Siqi [1 ]
Yan, Mingjiang [1 ]
Wan, Qun [1 ]
He, Long [2 ]
Wang, Xinrui [2 ]
Li, Junlin [2 ]
Affiliations
[1] Univ Elect Sci & Technol China, Sch Informat & Commun Engn, Chengdu 611731, Peoples R China
[2] 208 Res Inst China Ordnance Ind, Beijing 102200, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Fingerprinting localization; indoor localization; received signal strength; recurrent neural network; FUSION;
DOI
10.1109/JSEN.2019.2936412
Chinese Library Classification (CLC)
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology];
Discipline codes
0808 ; 0809 ;
Abstract
The wireless fingerprinting localization method learns a mapping from a fingerprint measurement to an estimated location, which makes it better suited to complex indoor environments than propagation-model-based methods. However, most traditional methods consider only location matching at single points in time or space and ignore the fact that correlations in the measurement sequence can improve the accuracy and robustness of mobile positioning. Recently, recurrent neural networks (RNNs) have been widely applied in fields such as speech, language, and video processing, offering flexible and powerful processing capabilities for nonlinear, high-dimensional sequence inputs. In this paper, we propose DL-RNN, a real-time wireless localization model consisting of double RNNs: the first RNN estimates the location from the history of observed signals, and the second RNN filters the location from the history of estimated locations, which further improves localization performance. Both simulations and measured experiments show the high accuracy and robustness of the proposed algorithm.
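The abstract describes a two-stage pipeline: an estimation RNN maps a sequence of received-signal-strength (RSS) fingerprints to coarse locations, and a filtering RNN then smooths the sequence of estimated locations. The sketch below illustrates that structure only; the LSTM cell type, layer sizes, and the LocalizationRNN/FilteringRNN names are assumptions for illustration, not the paper's exact design.

```python
# Minimal, hypothetical sketch of the double-RNN idea from the abstract.
# Stage 1: RSS fingerprint sequence -> coarse location sequence.
# Stage 2: coarse location sequence -> refined (filtered) location sequence.
# Cell types and dimensions below are assumptions, not the paper's specification.
import torch
import torch.nn as nn


class LocalizationRNN(nn.Module):
    """First RNN: maps a sequence of RSS vectors to coarse 2-D locations."""
    def __init__(self, num_aps: int, hidden: int = 64):
        super().__init__()
        self.rnn = nn.LSTM(num_aps, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)  # (x, y) coordinates

    def forward(self, rss_seq):           # rss_seq: (batch, time, num_aps)
        h, _ = self.rnn(rss_seq)
        return self.head(h)               # (batch, time, 2)


class FilteringRNN(nn.Module):
    """Second RNN: filters the history of estimated locations."""
    def __init__(self, hidden: int = 32):
        super().__init__()
        self.rnn = nn.LSTM(2, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)

    def forward(self, loc_seq):           # loc_seq: (batch, time, 2)
        h, _ = self.rnn(loc_seq)
        return self.head(h)


if __name__ == "__main__":
    num_aps, batch, time_steps = 20, 8, 15
    rss = torch.randn(batch, time_steps, num_aps)  # dummy RSS measurements

    estimator, smoother = LocalizationRNN(num_aps), FilteringRNN()
    coarse = estimator(rss)      # stage 1: estimate locations from the RSS history
    refined = smoother(coarse)   # stage 2: filter the history of estimates
    print(refined.shape)         # torch.Size([8, 15, 2])
```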
Pages: 286-295
Page count: 10