Local and soft feature selection for value function approximation in batch reinforcement learning for robot navigation

Cited by: 0
Authors
Fatemeh Fathinezhad
Peyman Adibi
Bijan Shoushtarian
Jocelyn Chanussot
Affiliations
[1] Faculty of Computer Engineering, Artificial Intelligence Department
[2] University of Grenoble Alpes
[3] CNRS
[4] Grenoble INP
[5] GIPSA-lab
Source
The Journal of Supercomputing | 2024, Vol. 80
Keywords
Reinforcement learning; Value function approximation; Local relevance feature selection; Robot navigation;
DOI: Not available
Abstract
This paper proposes a novel method for robot navigation in high-dimensional environments that reduces the dimensionality of the state space using local and soft feature selection. The algorithm selects relevant features based on local correlations between states, avoiding redundant or irrelevant information and weighting sensor values accordingly. By optimizing the value function approximation with the locally weighted features of states in the reinforcement learning process, the method improves the robot’s motion flexibility, shortens learning time and the distance traveled to reach the goal, and minimizes collisions with obstacles. The approach was tested on an E-puck robot in the Webots robot simulator across different test environments.
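The record itself contains no code, but the idea described in the abstract can be illustrated with a short, hedged sketch: per-state soft feature weights derived from a k-nearest-neighbour neighbourhood, followed by linear fitted Q-iteration on the weighted features over a batch of transitions. The weighting rule (a softmax over local per-feature spread), the function names, and the toy sensor data below are assumptions for illustration only, not the authors' exact algorithm.

# Illustrative sketch only: generic soft local feature weighting combined with
# linear fitted Q-iteration on a batch of transitions. Not the paper's method.
import numpy as np

def soft_local_feature_weights(states, k=10, temperature=1.0):
    """Per-state soft feature weights from a k-nearest-neighbour neighbourhood.

    Features that vary little among a state's neighbours (locally redundant
    sensor readings) receive lower weight; weights are softmax-normalised.
    """
    n, d = states.shape
    weights = np.zeros((n, d))
    for i in range(n):
        dists = np.linalg.norm(states - states[i], axis=1)
        neighbours = states[np.argsort(dists)[1:k + 1]]     # exclude the state itself
        spread = np.var(neighbours, axis=0)                 # local spread per feature
        scaled = spread / (temperature * (spread.max() + 1e-8))
        weights[i] = np.exp(scaled) / np.exp(scaled).sum()  # soft selection
    return weights

def fitted_q_iteration(states, actions, rewards, next_states, weights,
                       n_actions, gamma=0.95, n_iters=50, ridge=1e-3):
    """Batch linear fitted Q-iteration over locally weighted features."""
    phi = weights * states                                  # soft feature selection
    phi_next = weights * next_states                        # reuse weights for simplicity
    d = phi.shape[1]
    theta = np.zeros((n_actions, d))
    for _ in range(n_iters):
        q_next = phi_next @ theta.T                         # (n, n_actions)
        targets = rewards + gamma * q_next.max(axis=1)
        for a in range(n_actions):
            mask = actions == a
            if not mask.any():
                continue
            X, y = phi[mask], targets[mask]
            theta[a] = np.linalg.solve(X.T @ X + ridge * np.eye(d), X.T @ y)
    return theta

# Toy usage on random sensor data (hypothetical range readings, 3 discrete actions)
rng = np.random.default_rng(0)
S = rng.uniform(0, 1, size=(500, 8))
A = rng.integers(0, 3, size=500)
R = -S.min(axis=1)                                          # penalise proximity to obstacles
S2 = np.clip(S + rng.normal(0, 0.05, S.shape), 0, 1)
W = soft_local_feature_weights(S, k=15)
theta = fitted_q_iteration(S, A, R, S2, W, n_actions=3)
print(theta.shape)                                          # (3, 8): one weight vector per action

The ridge term keeps each per-action least-squares solve well conditioned when few samples of an action are present in the batch; the softmax temperature controls how sharply the weighting concentrates on a few sensors.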
Pages: 10720-10745 (25 pages)
Related papers (50 in total)
  • [41] Dynamic Spectrum Anti-Jamming With Reinforcement Learning Based on Value Function Approximation
    Zhu, Xinyu
    Huang, Yang
    Wang, Shaoyu
    Wu, Qihui
    Ge, Xiaohu
    Liu, Yuan
    Gao, Zhen
    IEEE WIRELESS COMMUNICATIONS LETTERS, 2023, 12 (02) : 386 - 390
  • [42] Adaptive importance sampling for value function approximation in off-policy reinforcement learning
    Hachiya, Hirotaka
    Akiyama, Takayuki
    Sugiyama, Masashi
    Peters, Jan
    NEURAL NETWORKS, 2009, 22 (10) : 1399 - 1410
  • [43] Tensor and Matrix Low-Rank Value-Function Approximation in Reinforcement Learning
    Rozada, Sergio
    Paternain, Santiago
    Marques, Antonio
    IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2024, 72 : 1634 - 1649
  • [44] A Novel Artificial Hydrocarbon Networks Based Value Function Approximation in Hierarchical Reinforcement Learning
    Ponce, Hiram
    ADVANCES IN SOFT COMPUTING, MICAI 2016, PT II, 2017, 10062 : 211 - 225
  • [45] Foresight Social-aware Reinforcement Learning for Robot Navigation
    Zhou, Yanying
    Li, Shijie
    Garcke, Jochen
    2023 35TH CHINESE CONTROL AND DECISION CONFERENCE, CCDC, 2023, : 3501 - 3507
  • [46] Reinforcement learning based on FNN and its application in robot navigation
    College of Information Science and Engineering, Northeastern University, Shenyang 110004, China
    Kongzhi yu Juece/Control and Decision, 2007, (5): 525-529, 534
  • [47] Safe Robot Navigation Using Constrained Hierarchical Reinforcement Learning
    Roza, Felippe Schmoeller
    Rasheed, Hassan
    Roscher, Karsten
    Ning, Xiangyu
    Guennemann, Stephan
    2022 21ST IEEE INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND APPLICATIONS, ICMLA, 2022, : 737 - 742
  • [48] Using Reinforcement Learning to Attenuate for Stochasticity in Robot Navigation Controllers
    Gillespie, James
    Rano, Inaki
    Siddique, Nazmul
    Santos, Jose
    Khamassi, Mehdi
    2019 IEEE SYMPOSIUM SERIES ON COMPUTATIONAL INTELLIGENCE (IEEE SSCI 2019), 2019, : 705 - 713
  • [49] Robust Quantum-Inspired Reinforcement Learning for Robot Navigation
    Dong, Daoyi
    Chen, Chunlin
    Chu, Jian
    Tarn, Tzyh-Jong
    IEEE-ASME TRANSACTIONS ON MECHATRONICS, 2012, 17 (01) : 86 - 97
  • [50] A Brief Survey: Deep Reinforcement Learning in Mobile Robot Navigation
    Jiang, Haoge
    Wang, Han
    Yau, Wei-Yun
    Wan, Kong-Wah
    PROCEEDINGS OF THE 15TH IEEE CONFERENCE ON INDUSTRIAL ELECTRONICS AND APPLICATIONS (ICIEA 2020), 2020, : 592 - 597