Active one-shot learning by a deep Q-network strategy

Cited by: 8
Authors
Chen, Li [1]
Huang, Honglan [1]
Feng, Yanghe [1]
Cheng, Guangquan [1]
Huang, Jincai [1]
Liu, Zhong [1]
Affiliation
[1] National University of Defense Technology, College of Systems Engineering, Changsha, People's Republic of China
Funding
National Natural Science Foundation of China;
Keywords
One-shot learning; Deep Q-network strategy; Active-learning setup;
DOI
10.1016/j.neucom.2019.11.017
CLC number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
One-shot learning has recently attracted growing attention as a way to build models that can classify significant events from only a few, or even no, labeled examples. In this paper, we introduce a deep Q-network strategy into one-shot learning (OL-DQN) to design a more intelligent learner that infers whether to label a sample automatically or to request its true label in an active-learning setup. We then conduct experiments on the ALOI dataset, for the classification of objects recorded under various imaging conditions, and on a handwriting-recognition dataset composed of both characters and digits, to evaluate the performance of the proposed model and to analyze its application, respectively. The results demonstrate that our model achieves a better trade-off between prediction accuracy and the number of label requests than a purely supervised learner, the prior work AOL, and the conventional active-learning algorithm QBC. (C) 2019 Elsevier B.V. All rights reserved.
Pages: 324-335
Number of pages: 12
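The abstract describes an active one-shot learner in which a deep Q-network decides, for each incoming sample, whether to predict a class itself or request the true label. Below is a minimal sketch of that decision step, not the authors' implementation: the state layout (sample embedding concatenated with a one-hot of the previously observed label), the network sizes, and the AOL-style reward values (+1 correct, -1 wrong, small penalty for a request) are assumptions for illustration.

```python
# Minimal sketch of the predict-or-request decision in an active one-shot
# learner driven by a deep Q-network. All sizes and reward values below are
# illustrative assumptions, not the paper's settings.
import random
import torch
import torch.nn as nn

NUM_CLASSES = 5                        # hypothetical number of classes per episode
EMBED_DIM = 64                         # hypothetical feature size of one sample
STATE_DIM = EMBED_DIM + NUM_CLASSES    # sample embedding + one-hot of previous label
NUM_ACTIONS = NUM_CLASSES + 1          # predict one of K classes, or request the label


class QNet(nn.Module):
    """Q-network mapping a state to one Q-value per action."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 128), nn.ReLU(),
            nn.Linear(128, NUM_ACTIONS),
        )

    def forward(self, state):
        return self.net(state)


def select_action(qnet, state, epsilon=0.1):
    """Epsilon-greedy choice between predicting a class and requesting a label."""
    if random.random() < epsilon:
        return random.randrange(NUM_ACTIONS)
    with torch.no_grad():
        return int(qnet(state).argmax().item())


def reward(action, true_label, request_cost=-0.05):
    """AOL-style reward: +1 for a correct guess, -1 for a wrong guess, small cost to ask."""
    if action == NUM_CLASSES:          # the "request label" action
        return request_cost
    return 1.0 if action == true_label else -1.0


# Usage: one decision for a single (hypothetical) sample embedding.
qnet = QNet()
state = torch.randn(STATE_DIM)
action = select_action(qnet, state)
print(action, reward(action, true_label=2))
```

Under these assumptions, the small negative reward for a label request is what produces the trade-off the abstract reports: the agent learns to ask for the true label only when the Q-values of its class-prediction actions are low.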