Knowledge Transfer in Deep Reinforcement Learning via an RL-Specific GAN-Based Correspondence Function

Cited by: 0
Authors
Ruman, Marko [1 ]
Guy, Tatiana V. [1 ,2 ]
Affiliations
[1] Czech Acad Sci, Inst Informat Theory & Automat, Dept Adapt Syst, Prague 18200, Czech Republic
[2] Czech Univ Life Sci, Fac Econ & Management, Dept Informat Engn, Prague 16500, Czech Republic
Keywords
Training; Decision making; Games; Network architecture; Generative adversarial networks; Deep reinforcement learning; Knowledge transfer; Standards; Deep learning; Markov decision process; Reinforcement learning; Transfer learning
DOI
10.1109/ACCESS.2024.3497589
CLC Number
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Deep reinforcement learning has demonstrated superhuman performance in complex decision-making tasks, but it struggles with generalization and knowledge reuse, key aspects of true intelligence. This article introduces a novel approach that modifies Cycle Generative Adversarial Networks specifically for reinforcement learning, enabling effective one-to-one knowledge transfer between two tasks. Our method enhances the loss function with two new components: model loss, which captures dynamic relationships between source and target tasks, and Q-loss, which identifies states significantly influencing the target decision policy. Tested on the 2-D Atari game Pong, our method achieved 100% knowledge transfer in identical tasks and either 100% knowledge transfer or a 30% reduction in training time for a rotated task, depending on the network architecture. In contrast, using standard Generative Adversarial Networks or Cycle Generative Adversarial Networks led to worse performance than training from scratch in the majority of cases. The results demonstrate that the proposed method improves knowledge generalization in deep reinforcement learning.
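The abstract names the two added loss terms but gives no formulas, so the sketch below shows one way such a combined CycleGAN-style objective for RL state mapping could be assembled in PyTorch. The module names (G_s2t, G_t2s, D_src, D_tgt, f_src, f_tgt, Q_tgt), the least-squares adversarial form, the loss weights, and the specific forms of the model loss and Q-loss are assumptions made for illustration, not the authors' implementation.

```python
# Illustrative sketch only (not the paper's code): a CycleGAN generator
# objective between source and target task states, extended with an assumed
# "model loss" and "Q-loss" in the spirit of the abstract.
import torch
import torch.nn as nn


def generator_objective(
    s_src, s_tgt,            # batches of source / target task states
    a_src,                    # actions taken in the source states
    G_s2t, G_t2s,             # generators: source->target and target->source state maps
    D_src, D_tgt,             # discriminators on source / target states
    f_src, f_tgt,             # assumed learned one-step dynamics models per task
    Q_tgt,                    # target-task action-value network
    lam_cyc=10.0, lam_model=1.0, lam_q=1.0,   # illustrative weights
):
    mse, l1 = nn.MSELoss(), nn.L1Loss()

    # Standard adversarial terms (least-squares GAN form, one common choice).
    fake_tgt = G_s2t(s_src)
    fake_src = G_t2s(s_tgt)
    d_fake_tgt, d_fake_src = D_tgt(fake_tgt), D_src(fake_src)
    adv = mse(d_fake_tgt, torch.ones_like(d_fake_tgt)) + \
          mse(d_fake_src, torch.ones_like(d_fake_src))

    # Standard cycle-consistency terms.
    cyc = l1(G_t2s(fake_tgt), s_src) + l1(G_s2t(fake_src), s_tgt)

    # Assumed "model loss": mapping a source transition forward should agree
    # with the target dynamics applied to the already-mapped state.
    model = l1(G_s2t(f_src(s_src, a_src)), f_tgt(fake_tgt, a_src))

    # Assumed "Q-loss": emphasise states where the target Q-values spread
    # widely over actions, i.e. states that strongly influence the policy.
    with torch.no_grad():
        q = Q_tgt(fake_tgt)                                   # [batch, n_actions]
        weight = q.max(dim=1).values - q.min(dim=1).values    # [batch]
    per_state_err = (G_t2s(fake_tgt) - s_src).abs().flatten(1).mean(dim=1)
    q_loss = (weight * per_state_err).mean()

    return adv + lam_cyc * cyc + lam_model * model + lam_q * q_loss
```

In this reading, the model loss ties the learned correspondence to the tasks' dynamics and the Q-loss reweights the reconstruction error toward decision-relevant states; the exact definitions used in the paper may differ.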
Pages: 177204-177218
Number of pages: 15