Attentive Multi-task Deep Reinforcement Learning

Cited by: 4
Authors
Bram, Timo [1 ]
Brunner, Gino [1 ]
Richter, Oliver [1 ]
Wattenhofer, Roger [1 ]
Affiliations
[1] Swiss Federal Institute of Technology (ETH Zurich), Department of Information Technology and Electrical Engineering, Zurich, Switzerland
DOI
10.1007/978-3-030-46133-1_9
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
Sharing knowledge between tasks is vital for efficient learning in a multi-task setting. However, most research so far has focused on the easier case where knowledge transfer is not harmful, i.e., where knowledge from one task cannot negatively impact performance on another. In contrast, we present an attention-based approach to multi-task deep reinforcement learning that requires no a priori assumptions about the relationships between tasks. Our attention network automatically groups task knowledge into sub-networks at state-level granularity. It thereby achieves positive knowledge transfer where possible, and avoids negative transfer where tasks interfere. We test our algorithm against two state-of-the-art multi-task/transfer learning approaches and show comparable or superior performance while requiring fewer network parameters.
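As a rough illustration of the mechanism the abstract describes, the following PyTorch sketch blends a pool of sub-networks using per-state attention weights, so states that benefit from shared knowledge can attend to the same sub-networks while interfering states attend to different ones. The class name, layer sizes, and soft-blending formulation are illustrative assumptions, not the paper's exact architecture.

```python
# Minimal sketch (assumed design, not the paper's exact one): an attention
# network produces per-state weights over a pool of sub-networks, softly
# grouping task knowledge at state-level granularity.
import torch
import torch.nn as nn


class AttentiveMultiTaskNet(nn.Module):
    def __init__(self, state_dim: int, action_dim: int,
                 num_subnets: int = 4, hidden: int = 64):
        super().__init__()
        # Shared pool of sub-networks; each maps a state to action values.
        self.subnets = nn.ModuleList([
            nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU(),
                          nn.Linear(hidden, action_dim))
            for _ in range(num_subnets)
        ])
        # Attention network: per-state soft assignment over sub-networks.
        self.attention = nn.Sequential(nn.Linear(state_dim, num_subnets),
                                       nn.Softmax(dim=-1))

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        weights = self.attention(state)          # (batch, num_subnets)
        outs = torch.stack([net(state) for net in self.subnets],
                           dim=1)                # (batch, num_subnets, action_dim)
        # Blend sub-network outputs by attention weight: states from
        # compatible tasks can share sub-networks, others stay separate.
        return (weights.unsqueeze(-1) * outs).sum(dim=1)  # (batch, action_dim)


# Usage: one network shared across all tasks; the attention weights decide,
# per state, which sub-networks that state draws on.
net = AttentiveMultiTaskNet(state_dim=8, action_dim=4)
q_values = net(torch.randn(32, 8))
```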
Pages: 134-149
Page count: 16
Related Papers (50 items total)
  • [21] Knowledge Transfer in Multi-Task Deep Reinforcement Learning for Continuous Control
    Xu, Zhiyuan
    Wu, Kun
    Che, Zhengping
    Tang, Jian
    Ye, Jieping
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020, 2020, 33
  • [22] Decision making on robot with multi-task using deep reinforcement learning for each task
    Shimoguchi, Yuya
    Kurashige, Kentarou
    2019 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN AND CYBERNETICS (SMC), 2019 : 3460 - 3465
  • [23] Computational task offloading algorithm based on deep reinforcement learning and multi-task dependency
    Zhang, Xiaoqi
    Lin, Tengxiang
    Lin, Cheng-Kuan
    Chen, Zhen
    Cheng, Hongju
    THEORETICAL COMPUTER SCIENCE, 2024, 993
  • [24] Unsupervised Task Clustering for Multi-task Reinforcement Learning
    Ackermann, Johannes
    Richter, Oliver
    Wattenhofer, Roger
    MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES, 2021, 12975 : 222 - 237
  • [25] Multi-modal Sentiment and Emotion Joint Analysis with a Deep Attentive Multi-task Learning Model
    Zhang, Yazhou
    Rong, Lu
    Li, Xiang
    Chen, Rui
    ADVANCES IN INFORMATION RETRIEVAL, PT I, 2022, 13185 : 518 - 532
  • [26] Multi-Asset Market Making via Multi-Task Deep Reinforcement Learning
    Haider, Abbas
    Hawe, Glenn I.
    Wang, Hui
    Scotney, Bryan
    MACHINE LEARNING, OPTIMIZATION, AND DATA SCIENCE (LOD 2021), PT II, 2022, 13164 : 353 - 364
  • [27] Co-Attentive Multi-Task Learning for Explainable Recommendation
    Chen, Zhongxia
    Wang, Xiting
    Xie, Xing
    Wu, Tong
    Bu, Guoqing
    Wang, Yining
    Chen, Enhong
    PROCEEDINGS OF THE TWENTY-EIGHTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2019 : 2137 - 2143
  • [28] Multi-Task Decouple Learning With Hierarchical Attentive Point Process
    Wu, Weichang
    Zhang, Xiaolu
    Zhao, Shiwan
    Fu, Chilin
    Zhou, Jun
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2024, 36 (04) : 1741 - 1757
  • [29] Multi-task Batch Reinforcement Learning with Metric Learning
    Li, Jiachen
    Vuong, Quan
    Liu, Shuang
    Liu, Minghua
    Ciosek, Kamil
    Christensen, Henrik
    Su, Hao
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020, 2020, 33
  • [30] Multi-Task Reinforcement Learning with Soft Modularization
    Yang, Ruihan
    Xu, Huazhe
    Wu, Yi
    Wang, Xiaolong
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020, 2020, 33