Feature Selection Method Using Multi-Agent Reinforcement Learning Based on Guide Agents

Cited: 5
Authors
Kim, Minwoo [1 ,2 ]
Bae, Jinhee [3 ]
Wang, Bohyun [1 ]
Ko, Hansol [1 ]
Lim, Joon S. [1 ]
Affiliations
[1] Gachon Univ, Dept Comp Sci, Seongnam Si 13557, Gyeonggi Do, South Korea
[2] MEZOO Co Ltd, R&D Ctr 2, AI Team, Gieopdosi Ro 200,Jijeong Myeon, Wonju 26354, Gangwon Do, South Korea
[3] Univ Southern Calif, Dept Comp Sci, Los Angeles, CA 90007 USA
Funding
National Research Foundation, Singapore;
Keywords
feature selection; guide agents; main agents; multi-agent; reinforcement learning (RL); rewards; ALGORITHM;
DOI
10.3390/s23010098
Chinese Library Classification (CLC)
O65 [Analytical Chemistry];
Subject Classification Codes
070302; 081704;
Abstract
In this study, we propose a method that automatically finds features in a dataset that are effective for classification or prediction, using multi-agent reinforcement learning with guide agents. Each feature of the dataset is assigned both a main agent and a guide agent, and these agents decide whether to select that feature. Main agents select the optimal features, while guide agents provide the criteria for judging the main agents' actions. After the main and guide rewards are obtained for the features selected by the agents, each main agent that behaved differently from its guide agent updates its Q-value using the learning reward delivered to the main agents. This behavior comparison lets a main agent judge whether its own behavior is correct without relying on other algorithms. After this process is repeated for each episode, the final features are selected. Because the proposed feature selection method uses multiple agents, it reduces the number of actions each agent can perform and finds optimal features effectively and quickly. Finally, comparative experiments on multiple datasets show that the proposed method selects features that are effective for classification and increases classification accuracy.
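The abstract can be illustrated with a minimal sketch: one two-action Q-table per feature for both a main and a guide agent, where a main agent updates its Q-value only when its action differs from its guide's. The reward rule, the toy `evaluate` scorer, and all hyperparameters below are illustrative assumptions, not the authors' exact algorithm.

```python
import random

random.seed(0)
N_FEATURES = 8
ALPHA, EPSILON, EPISODES = 0.1, 0.2, 200

# Toy stand-in for a classifier's validation accuracy: pretend only the
# even-indexed features are informative (assumption for illustration).
def evaluate(subset):
    good = sum(1 for f in subset if f % 2 == 0)
    return good / (len(subset) + 1)

# One 2-action Q-table per feature: index 0 = drop, index 1 = select.
main_q = [[0.0, 0.0] for _ in range(N_FEATURES)]
guide_q = [[0.0, 0.0] for _ in range(N_FEATURES)]

def act(q, eps):
    # Epsilon-greedy choice between dropping and selecting the feature.
    return random.randrange(2) if random.random() < eps else int(q[1] >= q[0])

for _ in range(EPISODES):
    main_a = [act(main_q[f], EPSILON) for f in range(N_FEATURES)]
    guide_a = [act(guide_q[f], 0.0) for f in range(N_FEATURES)]  # greedy guide
    main_r = evaluate([f for f in range(N_FEATURES) if main_a[f]])
    guide_r = evaluate([f for f in range(N_FEATURES) if guide_a[f]])
    for f in range(N_FEATURES):
        # Only main agents whose action differs from their guide's get a
        # learning reward: positive if the main subset beat the guide's.
        if main_a[f] != guide_a[f]:
            r = main_r - guide_r
            main_q[f][main_a[f]] += ALPHA * (r - main_q[f][main_a[f]])
        guide_q[f][guide_a[f]] += ALPHA * (guide_r - guide_q[f][guide_a[f]])

selected = [f for f in range(N_FEATURES) if main_q[f][1] > main_q[f][0]]
print("selected features:", selected)
```

Because each agent owns a single feature, its action space has only two entries, which is the source of the "reducing the number of actions each agent can perform" claim in the abstract.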
Pages: 14