Attentive Multi-task Deep Reinforcement Learning

Cited by: 4
Authors
Bräm, Timo [1]
Brunner, Gino [1 ]
Richter, Oliver [1 ]
Wattenhofer, Roger [1 ]
Institutions
[1] Swiss Federal Institute of Technology (ETH), Department of Information Technology and Electrical Engineering, Zurich, Switzerland
DOI
10.1007/978-3-030-46133-1_9
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Sharing knowledge between tasks is vital for efficient learning in a multi-task setting. However, most research so far has focused on the easier case where knowledge transfer is not harmful, i.e., where knowledge from one task cannot negatively impact the performance on another task. In contrast, we present an approach to multi-task deep reinforcement learning based on attention that does not require any a priori assumptions about the relationships between tasks. Our attention network automatically groups task knowledge into sub-networks at state-level granularity. It thereby achieves positive knowledge transfer where possible, and avoids negative transfer in cases where tasks interfere. We test our algorithm against two state-of-the-art multi-task/transfer learning approaches and show comparable or superior performance while requiring fewer network parameters.
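The abstract describes an attention network that routes each state through a soft mixture of sub-networks, so that tasks sharing useful knowledge attend to the same sub-network while interfering tasks are routed apart. A minimal sketch of that routing idea, assuming simple linear sub-networks and a linear attention head (all names and shapes here are illustrative assumptions, not the authors' exact architecture):

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

class AttentiveMultiTaskPolicy:
    """Sketch: per-state soft attention over K sub-networks.

    States that benefit from shared knowledge can attend to the same
    sub-network; interfering tasks can be routed to disjoint ones,
    which is one way to avoid negative transfer.
    """

    def __init__(self, state_dim, action_dim, num_subnets, hidden=32, seed=0):
        rng = np.random.default_rng(seed)
        # One small two-layer sub-network per knowledge group.
        self.subnets = [
            (rng.normal(0.0, 0.1, (state_dim, hidden)),
             rng.normal(0.0, 0.1, (hidden, action_dim)))
            for _ in range(num_subnets)
        ]
        # Attention head: state -> one logit per sub-network.
        self.attn = rng.normal(0.0, 0.1, (state_dim, num_subnets))

    def forward(self, state):
        # Attention weights are a function of the state, giving the
        # "state-level granularity" grouping described in the abstract.
        weights = softmax(state @ self.attn)
        outputs = [np.tanh(state @ w1) @ w2 for w1, w2 in self.subnets]
        # Output is the attention-weighted mixture of sub-network outputs.
        mixed = sum(w * o for w, o in zip(weights, outputs))
        return mixed, weights

policy = AttentiveMultiTaskPolicy(state_dim=4, action_dim=2, num_subnets=3)
q_values, attention = policy.forward(np.ones(4))
```

In a full implementation the sub-networks and the attention head would be trained end-to-end with a deep RL objective; this sketch only shows the forward routing.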
Pages: 134-149
Page count: 16