Integrated Localization and Tracking for AUV With Model Uncertainties via Scalable Sampling-Based Reinforcement Learning Approach

Cited by: 34
Authors
Yan, Jing [1 ]
Li, Xin [1 ]
Yang, Xian [2 ]
Luo, Xiaoyuan [1 ]
Hua, Changchun [1 ]
Guan, Xinping [3 ]
Affiliations
[1] Yanshan Univ, Dept Automat, Qinhuangdao 066004, Hebei, Peoples R China
[2] Yanshan Univ, Inst Informat Sci & Engn, Qinhuangdao 066004, Hebei, Peoples R China
[3] Shanghai Jiao Tong Univ, Dept Automat, Shanghai 200240, Peoples R China
Source
IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS: SYSTEMS | 2022, Vol. 52, No. 11
Keywords
Location awareness; Uncertainty; Target tracking; Clocks; Computational modeling; Synchronization; Global Positioning System; Autonomous underwater vehicle (AUV); localization; reinforcement learning (RL); tracking; uncertain system; AUTONOMOUS UNDERWATER VEHICLE; TRAJECTORY TRACKING; JOINT LOCALIZATION; SENSOR NETWORKS; OPTIMIZATION;
DOI
10.1109/TSMC.2021.3129534
Chinese Library Classification
TP [Automation Technology; Computer Technology]
Discipline Code
0812
Abstract
This article studies the joint localization and tracking problem for an autonomous underwater vehicle (AUV) under the constraints of asynchronous time clocks in the cyber channels and model uncertainty in the physical channels. Specifically, we develop a reinforcement learning (RL)-based asynchronous localization algorithm to estimate the position of the AUV, without requiring the AUV's clock to be synchronized with real time. Based on the estimated position, a scalable sampling strategy, the multivariate probabilistic collocation method with orthogonal fractional factorial design (M-PCM-OFFD), is employed to evaluate the time-varying uncertain model parameters of the AUV. An RL-based tracking controller is then designed to drive the AUV to the desired target point. In addition, performance analyses of the integrated solution are presented. The advantages of our solution are highlighted as follows: 1) the RL-based localization algorithm avoids the local optima of traditional least-squares methods; 2) the M-PCM-OFFD-based sampling strategy addresses model uncertainty while reducing computational cost; and 3) the integrated design of localization and tracking reduces communication energy consumption. Finally, simulations and experiments demonstrate that the proposed localization algorithm effectively eliminates the impact of asynchronous clocks and, more importantly, that integrating M-PCM-OFFD into the RL-based tracking controller yields accurate optimization solutions at limited computational cost.
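As a rough illustration of the probabilistic-collocation idea underlying M-PCM-OFFD (the toy model, parameter values, and names below are hypothetical, not taken from the paper): for a polynomial response to an uncertain parameter with known mean and variance, evaluating the model at only a few Gauss-Hermite collocation points recovers the expected output that plain Monte Carlo needs many thousands of model evaluations to approximate — which is the computational saving the abstract refers to.

```python
import numpy as np

# Hypothetical uncertain model parameter theta ~ N(mu, sigma^2),
# e.g. an unknown hydrodynamic coefficient of the AUV.
mu, sigma = 2.0, 0.5

def model_output(theta):
    # Toy polynomial response of the system to the uncertain parameter.
    return 1.0 + 3.0 * theta + 0.8 * theta ** 2

# Probabilistic collocation: evaluate the model only at the Gauss-Hermite
# collocation points (roots of the probabilists' Hermite polynomial).
# Three points integrate polynomials up to degree 5 exactly.
nodes, weights = np.polynomial.hermite_e.hermegauss(3)
theta_points = mu + sigma * nodes
pcm_mean = np.sum(weights * model_output(theta_points)) / np.sqrt(2.0 * np.pi)

# Brute-force Monte Carlo estimate of the same expectation, for comparison.
rng = np.random.default_rng(0)
mc_mean = model_output(rng.normal(mu, sigma, 200_000)).mean()

# Exact mean for this toy model: 1 + 3*mu + 0.8*(mu^2 + sigma^2) = 10.4
print(pcm_mean, mc_mean)
```

Three model evaluations suffice here, versus 200,000 for Monte Carlo; the OFFD part of M-PCM-OFFD further prunes the collocation grid when several uncertain parameters are present.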
Pages: 6952-6967
Page count: 16