STATE REPRESENTATION LEARNING FOR EFFECTIVE DEEP REINFORCEMENT LEARNING

Cited by: 0
Authors
Zhao, Jian [1 ]
Zhou, Wengang [1 ]
Zhao, Tianyu [1 ]
Zhou, Yun [1 ]
Li, Houqiang [1 ]
Affiliation
[1] Univ Sci & Technol China, EEIS Dept, CAS Key Lab GIPAS, Hefei, Peoples R China
Keywords
Representation learning; reinforcement learning
DOI
10.1109/icme46284.2020.9102924
Chinese Library Classification (CLC) Number
TP31 [Computer Software]
Subject Classification Code
081202; 0835
Abstract
Recent years have witnessed the great success of deep reinforcement learning (DRL) on a variety of vision-based games. Although deep neural networks have demonstrated strong power in representation learning, this capacity is under-explored in most DRL works, whose focus is usually on optimization solvers. In fact, we find that state feature learning is the main obstacle to further improvement of DRL algorithms. To address this issue, we propose a new state representation learning scheme with our Adjacent State Consistency Loss (ASC Loss). The loss is defined based on the hypothesis that adjacent states change less than far-apart ones, since scenes in videos generally evolve smoothly. In this paper, we exploit the ASC loss as an auxiliary to the RL loss in the training phase to boost state feature learning. We conduct evaluations on Atari games and MuJoCo continuous control tasks, which demonstrate that our method is superior to the OpenAI Baselines.
Pages: 6
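For illustration, below is a minimal, hypothetical PyTorch-style sketch of how an adjacent-state-consistency term could serve as an auxiliary objective alongside the RL loss, as described in the abstract. The encoder architecture, the margin formulation, and the weighting coefficient asc_weight are illustrative assumptions, not the paper's exact implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    # Maps raw observations to state features (hypothetical architecture).
    def __init__(self, obs_dim, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(),
                                 nn.Linear(128, feat_dim))
    def forward(self, obs):
        return self.net(obs)

def asc_loss(feat_t, feat_next, feat_far, margin=1.0):
    # Encourage adjacent states to lie closer in feature space than far-apart ones.
    d_near = (feat_t - feat_next).pow(2).mean(dim=1)
    d_far = (feat_t - feat_far).pow(2).mean(dim=1)
    # Hinge-style penalty when the adjacent distance is not smaller by at least `margin`.
    return F.relu(d_near - d_far + margin).mean()

# Combine with the RL loss during training (values below are placeholders).
obs_dim, batch = 8, 32
encoder = Encoder(obs_dim)
s_t, s_next, s_far = (torch.randn(batch, obs_dim) for _ in range(3))
rl_loss = torch.tensor(0.0)   # stand-in for the policy/value loss of the base algorithm
asc_weight = 0.1              # assumed trade-off coefficient
total_loss = rl_loss + asc_weight * asc_loss(encoder(s_t), encoder(s_next), encoder(s_far))
total_loss.backward()

In this sketch, the auxiliary term is minimized jointly with the RL objective so that the learned features respect the temporal smoothness of consecutive frames.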