An efficient and robust gradient reinforcement learning: Deep comparative policy

Times Cited: 1
Authors
Wang, Jiaguo [1 ]
Li, Wenheng [2 ]
Lei, Chao [3 ]
Yang, Meng [4 ]
Pei, Yang [1 ]
Affiliations
[1] Northwestern Polytech Univ, Xian, Peoples R China
[2] AVIC Xian Aeronaut Comp Tech Res Inst, Xian, Peoples R China
[3] Univ Melbourne, Sch Comp & Informat Syst, Parkville, Vic, Australia
[4] Monash Univ, Fac Informat Technol, Clayton, Vic, Australia
Funding
National Natural Science Foundation of China;
Keywords
Actor-critic; deep reinforcement learning; intelligent agent; iterative learning; GAME; GO;
DOI
10.3233/JIFS-233747
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Actor-critic architectures such as the deep deterministic policy gradient (DDPG) can capture higher-level concepts in the search for rich rewards, generate complex actions in continuous action spaces, and are widely used in practical applications. However, when the action space is limited and bounded by dynamic hard margins, training DDPG can be problematic and inefficient. Because real-world actuators always have margins and interferences, the actor network is likely, after initialization, to become stuck at a locally optimal point on the margin of the action space: the actor gradient points outside the action space while the actuators stop at the margin. If the hard margins are complex, dynamic, and unknown to the DDPG agent, penalty functions cannot be used to recover from the local optimum; if the random exploration process is enlarged instead, training runs the risk of outright failure. Relying solely on the gradient of the critic network to train the actor network is therefore not robust in real environments. To solve this problem, we modify DDPG into the deep comparative policy (DCP). Rather than leveraging the critic-to-actor gradient, the core training process of DCP is regulated by a T-fold comparison among randomly proposed adjacent actions. The performance of DDPG, DCP, and related algorithms is tested and compared in two experiments. Our results show that DCP is effective, efficient, and able to perform every task that DDPG can perform. More importantly, DCP is less affected by action-space margins, is safer with respect to training failure and local optima, and is more robust in applications with dynamic hard margins in the action space. A further advantage is that no complex penalty for detecting margin contact is required, so the reward function can remain brief.
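The abstract only sketches the idea behind DCP, so the snippet below is a hedged illustration rather than the authors' published algorithm: it shows one plausible reading of a "T-fold comparison among randomly proposed adjacent actions", in which T noisy neighbours of the actor's output are clipped to the feasible action box, scored by the critic, and the actor is regressed toward the best-scoring candidate. The function name `dcp_actor_update`, the parameters `T` and `sigma`, the `critic(state, action)` signature, and the regress-to-best-candidate rule are all assumptions made for illustration.

```python
# Illustrative sketch only (assumed details, not the paper's exact DCP update).
import torch
import torch.nn as nn
import torch.nn.functional as F

def dcp_actor_update(actor: nn.Module,
                     critic: nn.Module,
                     actor_opt: torch.optim.Optimizer,
                     state: torch.Tensor,        # (batch, state_dim)
                     action_low: torch.Tensor,   # hard lower margins of the action space
                     action_high: torch.Tensor,  # hard upper margins of the action space
                     T: int = 8,
                     sigma: float = 0.1) -> torch.Tensor:
    """One actor update driven by a T-fold comparison of adjacent actions
    instead of the critic-to-actor gradient used in DDPG."""
    with torch.no_grad():
        a = actor(state)                                   # current action proposal
        # Propose T adjacent actions with Gaussian noise, then clip them back
        # inside the (possibly dynamic) hard margins so every candidate is feasible.
        noise = sigma * torch.randn(T, *a.shape)
        candidates = (a.unsqueeze(0) + noise).clamp(action_low, action_high)
        candidates = torch.cat([a.unsqueeze(0), candidates], dim=0)  # keep current action
        # Score every candidate with the critic and pick the best one per state.
        q = torch.stack([critic(state, c) for c in candidates])      # (T+1, batch, 1)
        best = q.squeeze(-1).argmax(dim=0)                           # (batch,)
        target_action = candidates[best, torch.arange(state.shape[0])]
    # Regress the actor toward the winning candidate. No gradient flows through
    # the critic, so a gradient that points outside the action space cannot pin
    # the actor to a locally optimal point on the margin.
    loss = F.mse_loss(actor(state), target_action)
    actor_opt.zero_grad()
    loss.backward()
    actor_opt.step()
    return loss.detach()
```

Because every candidate is clipped to the feasible box before the comparison, the update never asks the actuators for an infeasible action, which is one way to read the abstract's claim that DCP needs no margin-touching penalty and stays robust near dynamic hard margins.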
Pages: 3773-3788
Number of pages: 16