Leveraging Reinforcement Learning Techniques for Effective Policy Adoption and Validation

Times Cited: 0
Authors
Kuang, Nikki Lijing [1 ]
Leung, Clement H. C. [2 ]
Affiliations
[1] Univ Calif San Diego, Dept Comp Sci & Engn, La Jolla, CA 92093 USA
[2] Chinese Univ Hong Kong, Sch Sci & Engn, Shenzhen, Peoples R China
Source
COMPUTATIONAL SCIENCE AND ITS APPLICATIONS - ICCSA 2019, PT II: 19TH INTERNATIONAL CONFERENCE, SAINT PETERSBURG, RUSSIA, JULY 1-4, 2019, PROCEEDINGS, PART II | 2019 / Vol. 11620
Keywords
Autonomous agent; Aviation safety; Decision rules; Multi-agent; Reinforcement learning; Stopping rules; Comprehensive survey
DOI
10.1007/978-3-030-24296-1_26
Chinese Library Classification (CLC) code
TP301 [Theory and Methods]
Discipline code
081202
Abstract
Rewards and punishments in various forms are pervasive and present in a wide variety of decision-making scenarios. By observing the outcomes of a sufficient number of repeated trials, one gradually learns the value and usefulness of a particular policy or strategy. In a given environment, however, the outcomes of different trials are subject to chance influence and variation. Since learning about the usefulness of a given policy by systematically undertaking sequential trials incurs significant cost, in most learning episodes one would wish to keep this cost within bounds by adopting learning stopping rules. In this paper, we examine the deployment of different stopping strategies in learning environments that range from highly stringent, for mission-critical operations, to highly tolerant, for non-mission-critical operations, with emphasis placed on the former and particular application to aviation safety. Two sequential phases of learning are identified in policy evaluation, and the outcome variations are described by a probabilistic model, with closed-form expressions obtained for the key measures of performance. Decision rules that map trial observations to policy choices are also formulated, and simulation experiments corroborate the validity of the theoretical results.
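As a rough, non-authoritative illustration of the sequential-trial setting described in the abstract, the sketch below treats each trial of a candidate policy as a Bernoulli success/failure and compares two illustrative stopping rules: a stringent rule (mission-critical flavour) that rejects the policy on the first failure and adopts it after r consecutive successes, and a tolerant rule (non-mission-critical flavour) that adopts after n trials provided the failure count stays within a bound. The rule definitions, the names run_trials, stringent_rule and tolerant_rule, and the parameter values are assumptions made here for illustration only; they are not the paper's actual decision rules, two-phase model, or closed-form expressions.

```python
import random


def run_trials(p_success, stopping_rule, max_trials=1000, rng=random):
    """Sequentially sample Bernoulli trial outcomes for a candidate policy and
    stop as soon as the stopping rule reaches a decision."""
    outcomes = []
    for _ in range(max_trials):
        outcomes.append(rng.random() < p_success)
        decision = stopping_rule(outcomes)
        if decision is not None:  # "adopt" or "reject"
            return decision, len(outcomes)
    return "undecided", len(outcomes)


def stringent_rule(r):
    """Illustrative mission-critical rule: reject on the first observed failure,
    adopt once r consecutive successes have been seen."""
    def rule(outcomes):
        if not outcomes[-1]:
            return "reject"
        if len(outcomes) >= r:
            return "adopt"
        return None
    return rule


def tolerant_rule(n, max_failures):
    """Illustrative non-mission-critical rule: observe up to n trials and adopt
    if the number of failures stays within max_failures."""
    def rule(outcomes):
        if outcomes.count(False) > max_failures:
            return "reject"
        if len(outcomes) >= n:
            return "adopt"
        return None
    return rule


if __name__ == "__main__":
    p, r, runs = 0.95, 20, 20_000
    adopted = sum(run_trials(p, stringent_rule(r))[0] == "adopt" for _ in range(runs))
    # For this illustrative stringent rule, adoption requires the first r trials
    # to all succeed, so its adoption probability is p**r.
    print(f"empirical adoption rate: {adopted / runs:.4f}   p**r = {p ** r:.4f}")
```

The decision logic is deliberately kept separate from the trial loop so that alternative stopping rules, such as the tolerant variant above, can be swapped in without changing the simulation itself.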
Pages: 311-322
Number of pages: 12