Adversarial Reinforcement Learning Based Robustification of Highlighted Map for Mobile Robot Localization

Citations: 0
Authors
Yoshimura, Ryota [1 ,2 ]
Maruta, Ichiro [2 ]
Fujimoto, Kenji [2 ]
Sato, Ken [3 ]
Kobayashi, Yusuke [3 ]
Affiliations
[1] Tokyo Metropolitan Ind Technol Res Inst, Reg Technol Support Div, Tokyo, Japan
[2] Kyoto Univ, Dept Aeronaut & Astronaut, Kyoto, Japan
[3] Tokyo Metropolitan Ind Technol Res Inst, Digitalizat Promot Sect, Tokyo, Japan
Source
2021 60TH ANNUAL CONFERENCE OF THE SOCIETY OF INSTRUMENT AND CONTROL ENGINEERS OF JAPAN (SICE) | 2021
Keywords
Adversarial reinforcement learning; highlighted map; mobile robots; Monte Carlo localization; particle filters;
DOI
Not available
Chinese Library Classification (CLC)
TP [Automation Technology; Computer Technology]
Discipline Code
0812
Abstract
A highlighted map, in which objects with unique shapes are emphasized, has been studied for mobile robot localization. Such a map improves localization accuracy without adding sensors or online computation, and it can be used with various particle-filter-based localization algorithms. Highlighted maps have previously been generated with reinforcement learning; however, because that method uses only a limited amount of actual sensor measurement data, the generated map is vulnerable to unexpected sensor measurement noise. In this paper, a robustification method for highlighted maps is proposed. The proposed method introduces a virtual obstacle that causes measurement noise and, based on adversarial reinforcement learning, simultaneously learns both the worst-case obstacle behavior and the optimal highlighted map. A numerical simulation verifies the robustness of the resulting map.
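The adversarial setup described in the abstract can be illustrated with a toy two-player sketch. This is not the authors' algorithm: the reward model (localization succeeds only if the highlighted landmark is not the one corrupted by the obstacle) is a hypothetical stand-in for particle-filter localization error, and the learning rule is plain multiplicative-weights self-play rather than the paper's reinforcement learning method. All names (`adversarial_training`, `n_landmarks`, `lr`) are assumptions introduced here for illustration.

```python
import random

# Toy sketch only -- NOT the authors' method. One agent (the map) picks a
# landmark to highlight; an adversary (a virtual obstacle) picks a landmark
# to corrupt with noise; both are trained simultaneously, mirroring the
# simultaneous worst-case/optimal learning described in the abstract.

def adversarial_training(n_landmarks=3, rounds=2000, lr=0.1, seed=0):
    rng = random.Random(seed)
    map_w = [1.0 / n_landmarks] * n_landmarks  # map agent's mixed strategy
    obs_w = [1.0 / n_landmarks] * n_landmarks  # adversary's mixed strategy

    def sample(weights):
        """Draw an index with probability proportional to its weight."""
        r = rng.uniform(0.0, sum(weights))
        acc = 0.0
        for i, w in enumerate(weights):
            acc += w
            if r <= acc:
                return i
        return len(weights) - 1

    for _ in range(rounds):
        h = sample(map_w)                  # landmark the map highlights
        c = sample(obs_w)                  # landmark the obstacle corrupts
        reward = 1.0 if h != c else 0.0    # hypothetical localization success
        map_w[h] *= 1.0 + lr * reward           # map maximizes reward
        obs_w[c] *= 1.0 + lr * (1.0 - reward)   # adversary minimizes it
        map_w = [w / sum(map_w) for w in map_w]  # renormalize strategies
        obs_w = [w / sum(obs_w) for w in obs_w]
    return map_w, obs_w
```

In this symmetric toy game the adversary pressures any landmark the map over-relies on, so the map is pushed toward strategies that remain usable under the worst-case noise, which is the intuition behind robustifying the highlighted map.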
Pages: 599-605
Page count: 7