Generative Adversarial Imitation Learning to Search in Branch-and-Bound Algorithms

Cited by: 2
Authors
Wang, Qi [1 ]
Blackley, Suzanne V. [2]
Tang, Chunlei [2 ]
Affiliations
[1] Fudan Univ, Shanghai 200438, Peoples R China
[2] Harvard Med Sch, Boston, MA 02120 USA
Source
DATABASE SYSTEMS FOR ADVANCED APPLICATIONS, DASFAA 2022, PT II | 2022
Keywords
Combinatorial optimization; Reinforcement learning; Branch-and-bound; Generative adversarial imitation learning
DOI
10.1007/978-3-031-00126-0_51
CLC number
TP [Automation technology, computer technology]
Discipline code
0812
Abstract
Recent studies have shown that reinforcement learning (RL) can achieve state-of-the-art performance at learning sophisticated heuristics by exploiting the shared internal structure of combinatorial optimization instances in the data. However, existing RL-based methods require extensive trial and error and rely on sophisticated reward engineering, which is laborious and inefficient for practical applications. This paper proposes a novel framework (RAIL) that combines RL and generative adversarial imitation learning (GAIL) to meet this challenge by learning to search in branch-and-bound algorithms. RAIL has a policy architecture with dual decoders, corresponding to the sequence decoding of RL and the edge decoding of GAIL, respectively. The two decoders complement and constrain each other, iteratively improving the learned policy and the reward function.
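The GAIL component sidesteps hand-crafted reward engineering by learning the reward adversarially: a discriminator is trained to separate expert demonstrations from policy rollouts, and the policy is then rewarded for producing state-action pairs the discriminator mistakes for expert behavior. The sketch below illustrates that discriminator-derived surrogate reward on toy data; the features, dimensions, and training loop are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy state-action features: expert pairs cluster around +1, policy rollouts
# around -1 (purely synthetic stand-ins for branch-and-bound search states).
expert = rng.normal(loc=1.0, scale=0.5, size=(200, 4))
policy = rng.normal(loc=-1.0, scale=0.5, size=(200, 4))

def discriminator(x, w, b):
    """Probability that a state-action pair comes from the expert."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

# Train a logistic discriminator: label expert pairs 1, policy pairs 0.
w, b, lr = np.zeros(4), 0.0, 0.1
X = np.vstack([expert, policy])
y = np.concatenate([np.ones(len(expert)), np.zeros(len(policy))])
for _ in range(200):
    p = discriminator(X, w, b)
    w -= lr * (X.T @ (p - y)) / len(y)
    b -= lr * np.mean(p - y)

# GAIL surrogate reward: -log(1 - D), higher when pairs look expert-like.
def gail_reward(x, w, b, eps=1e-8):
    return -np.log(1.0 - discriminator(x, w, b) + eps)

r_expert = gail_reward(expert, w, b).mean()
r_policy = gail_reward(policy, w, b).mean()
print(r_expert > r_policy)  # True: expert-like pairs earn higher reward
```

In the full GAIL loop this reward replaces a hand-designed one: the policy is updated with RL against `gail_reward`, fresh rollouts are collected, and the discriminator is retrained, so policy and reward improve in alternation, mirroring the iterative interplay the abstract describes.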
Pages: 673-680
Page count: 8