A substructure transfer reinforcement learning method based on metric learning

Cited by: 0
Authors
Chai, Peihua [1 ,2 ]
Chen, Bilian [1 ,2 ]
Zeng, Yifeng [3 ]
Yu, Shenbao [4 ]
Affiliations
[1] Xiamen Univ, Sch Aerosp Engn, Dept Automat, Xiamen 361005, Peoples R China
[2] Xiamen Key Lab Big Data Intelligent Anal & Decis M, Xiamen 361005, Peoples R China
[3] Northumbria Univ, Dept Comp & Informat Sci, Newcastle Upon Tyne NE1 8ST, England
[4] Fujian Normal Univ, Coll Comp & Cyber Secur, Fuzhou 350108, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Transfer learning; Reinforcement learning; Distance measure; Markov decision process;
DOI
10.1016/j.neucom.2024.128071
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject classification code
081104; 0812; 0835; 1405;
Abstract
Transfer reinforcement learning has gained significant traction in recent years as a critical research area, focusing on improving agents' decision-making by harnessing insights from analogous tasks. The typical transfer learning pipeline involves identifying appropriate source domains, sharing specific knowledge structures, and subsequently transferring the shared knowledge to novel tasks. However, existing transfer methods exhibit a pronounced dependency on high task similarity and an abundance of source data. Consequently, we formulate a more effective approach that exploits prior learning experience to direct an agent's exploration as it learns new tasks. Specifically, we introduce a novel transfer learning paradigm rooted in a distance measure over the Markov chain, denoted Distance Measure Substructure Transfer Reinforcement Learning (DMS-TRL). The core idea is to partition the Markov chain into elementary Markov units, each capturing the agent's transition between two states, and then to apply a new distance measure to identify the most similar structure, which is also the most suitable for transfer. Finally, we propose a policy transfer method that transfers knowledge from the selected Markov unit to the target task through the Q table. Through a series of experiments on discrete Gridworld scenarios, we compare our approach with state-of-the-art learning methods. The results show that DMS-TRL identifies the optimal policy in target tasks and converges faster.
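The sketch below is a minimal, hypothetical Python illustration of the general idea summarized in the abstract, not the authors' implementation: elementary Markov units are represented by placeholder feature vectors, a Euclidean distance stands in for the paper's distance measure, and the Q-values of the closest source unit seed the target task's Q table. All names here (unit_distance, transfer_q_values, the features/states fields) are assumptions made for illustration only.

```python
import numpy as np

def unit_distance(features_a, features_b):
    """Distance between two elementary Markov units, each described by a
    feature vector (placeholder: plain Euclidean distance)."""
    return float(np.linalg.norm(np.asarray(features_a) - np.asarray(features_b)))

def transfer_q_values(source_q, source_units, target_unit, n_states, n_actions):
    """Initialise a target Q table by copying the Q-values associated with
    the source Markov unit most similar to the target unit (hypothetical rule)."""
    # Pick the source unit whose structure is closest to the target unit.
    distances = [unit_distance(u["features"], target_unit["features"])
                 for u in source_units]
    best = source_units[int(np.argmin(distances))]

    # Start the target task from a zero Q table, then seed it with the
    # Q-values of the states covered by the selected source unit.
    target_q = np.zeros((n_states, n_actions))
    for s in best["states"]:
        target_q[s] = source_q[s]
    return target_q
```

In this sketch the target Q table is only partially initialised, so the agent still refines all values with ordinary temporal-difference updates; the transfer merely biases early exploration toward behaviour learned in the most similar source substructure.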
Pages: 11