Target Following for an Autonomous Underwater Vehicle Using Regularized ELM-based Reinforcement Learning

Cited: 0
Authors
Sun, Tingting [1 ]
He, Bo [1 ]
Nian, Rui [1 ]
Yan, Tianhong [2 ]
Affiliations
[1] Ocean Univ China, Sch Informat Sci & Engn, Qingdao, Peoples R China
[2] China Jiliang Univ, Sch Mechatron Engn, Hangzhou, Zhejiang, Peoples R China
Source
OCEANS 2015 - MTS/IEEE WASHINGTON, 2015
Keywords
reinforcement learning; extreme learning machine; autonomous underwater vehicle; target following; machine learning
DOI
Not available
Chinese Library Classification (CLC)
U6 [Water Transportation]; P75 [Ocean Engineering]
Discipline Classification Codes
0814; 081505; 0824; 082401
Abstract
Dynamic target following in an unknown environment is one of the key problems that must be solved to achieve autonomy and intelligence for an autonomous underwater vehicle (AUV). Reinforcement learning (RL) offers the possibility of learning a policy for a particular task without manual intervention or prior experience. However, standard RL algorithms cannot cope directly with the continuous state and action spaces that are ubiquitous in underwater environments. In this paper, a regularized extreme learning machine (RELM) based RL method is proposed for an AUV to follow a moving target. Q-learning, the most typical and widely applied RL method, is used to generate a suitable control policy for the AUV, and RELM, a modification of ELM that guarantees generalization performance, is used to handle continuous states and actions. Simulation results show that the AUV successfully tracks and follows the moving target with the proposed method.
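The abstract only sketches how Q-learning and RELM fit together. As a rough illustration of the general idea, not the paper's actual algorithm, the Python sketch below uses a ridge-regularized ELM as a Q-function approximator inside an epsilon-greedy, fitted-Q style learning loop. The class RELMQ, the function q_learning_with_relm, the parameters n_hidden and reg_lambda, and the Gym-style environment interface are all assumptions introduced here for illustration.

```python
import numpy as np

# Minimal sketch: Q-learning with a ridge-regularized ELM (RELM) as the
# Q-function approximator. All names below are illustrative assumptions,
# not taken from the paper.


class RELMQ:
    """Regularized ELM regressor mapping a (state, action-index) vector to a Q-value."""

    def __init__(self, in_dim, n_hidden=100, reg_lambda=1e-2, rng=None):
        rng = np.random.default_rng(0) if rng is None else rng
        # Hidden-layer weights are random and fixed, as in a standard ELM.
        self.W = rng.normal(size=(in_dim, n_hidden))
        self.b = rng.normal(size=n_hidden)
        self.beta = np.zeros(n_hidden)      # output weights, solved in closed form
        self.reg_lambda = reg_lambda

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y):
        # Ridge-regularized least squares: beta = (H^T H + lambda I)^-1 H^T y
        H = self._hidden(X)
        A = H.T @ H + self.reg_lambda * np.eye(H.shape[1])
        self.beta = np.linalg.solve(A, H.T @ y)

    def predict(self, X):
        return self._hidden(X) @ self.beta


def q_learning_with_relm(env, actions, episodes=200, gamma=0.95, eps=0.1):
    """Fitted-Q style loop: collect transitions, refit the RELM on bootstrapped targets.

    Assumes a Gym-style environment with the classic (obs, reward, done, info)
    step API and a finite, discretized action set `actions`.
    """
    state_dim = env.observation_space.shape[0]
    model = RELMQ(in_dim=state_dim + 1)     # input = state features + action index
    replay = []

    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            # epsilon-greedy action selection over the discretized action set
            if np.random.rand() < eps:
                a = np.random.randint(len(actions))
            else:
                q = model.predict(np.array([np.append(s, i) for i in range(len(actions))]))
                a = int(np.argmax(q))
            s2, r, done, _ = env.step(actions[a])
            replay.append((np.asarray(s), a, r, np.asarray(s2), done))
            s = s2

        # Refit the RELM on one-step bootstrapped Q-targets over the replay buffer.
        X, y = [], []
        for si, ai, ri, s2i, di in replay:
            if di:
                target = ri
            else:
                q_next = model.predict(
                    np.array([np.append(s2i, i) for i in range(len(actions))]))
                target = ri + gamma * float(np.max(q_next))
            X.append(np.append(si, ai))
            y.append(target)
        model.fit(np.array(X), np.array(y))

    return model
```

The closed-form ridge solve is what makes the fit "regularized" rather than a plain ELM least-squares fit; the lambda*I term keeps the output weights bounded, which is presumably the generalization property the abstract refers to.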
Pages: 5