Learning to learn for few-shot continual active learning

Cited by: 4
Authors
Ho, Stella [1 ,2 ]
Liu, Ming [1 ]
Gao, Shang [1 ]
Gao, Longxiang [3 ]
Affiliations
[1] Deakin Univ, Sch Informat Technol, Burwood, Vic 3125, Australia
[2] Univ Melbourne, Dept Biomed Engn, Parkville, Vic 2503, Australia
[3] Qilu Univ Technol, Comp Sci Ctr, Jinan 250353, Shandong, Peoples R China
Keywords
Continual learning; Meta-learning; Active learning; Few-shot learning; Text classification;
DOI
10.1007/s10462-024-10924-x
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Continual learning strives to ensure stability in solving previously seen tasks while demonstrating plasticity in novel domains. Recent advances in continual learning are mostly confined to a supervised learning setting, especially in the NLP domain. In this work, we consider a few-shot continual active learning setting in which labeled data are scarce and unlabeled data are abundant but the annotation budget is limited. We exploit meta-learning and propose a method called Meta-Continual Active Learning. This method sequentially queries the most informative examples from a pool of unlabeled data for annotation to enhance task-specific performance, and tackles continual learning problems through a meta-objective. Specifically, we employ meta-learning and experience replay to address inter-task confusion and catastrophic forgetting. We further incorporate textual augmentations to avoid the memory over-fitting caused by experience replay and sample queries, thereby ensuring generalization. We conduct extensive experiments on benchmark text classification datasets from diverse domains to validate the feasibility and effectiveness of meta-continual active learning. We also analyze the impact of different active learning strategies on various meta-continual learning models. The experimental results demonstrate that introducing randomness into sample selection is the best default strategy for maintaining generalization in a meta-continual learning framework.
Pages: 21
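To make the training loop described in the abstract concrete, the following is a minimal, illustrative PyTorch sketch of one meta-continual active learning step, reconstructed from the abstract alone. The helper names (query_indices, ReplayBuffer, meta_train_task), the oracle-provided labels, the first-order meta-update, and the tensor-shaped inputs are all assumptions made for illustration, not the authors' implementation.

```python
# Illustrative sketch only: a query-then-meta-update step combining
# pool-based active learning, fast adaptation, and experience replay.
import random
from copy import deepcopy

import torch
import torch.nn.functional as F


class ReplayBuffer:
    """Episodic memory holding labeled examples from earlier tasks."""

    def __init__(self):
        self.x, self.y = [], []

    def add(self, x, y):
        # The paper also augments stored texts to curb memory
        # over-fitting; augmentation is omitted in this sketch.
        self.x.append(x)
        self.y.append(y)

    def sample(self):
        return torch.cat(self.x), torch.cat(self.y)


def query_indices(model, pool_x, budget, strategy="random"):
    """Pick `budget` unlabeled pool examples to send for annotation.

    "random" mirrors the abstract's finding that randomness is a strong
    default; "entropy" is a common uncertainty-based alternative.
    """
    if strategy == "random":
        return random.sample(range(len(pool_x)), budget)
    with torch.no_grad():
        probs = F.softmax(model(pool_x), dim=-1)      # (N, num_classes)
        entropy = -(probs * probs.log()).sum(dim=-1)  # (N,)
    return entropy.topk(budget).indices.tolist()


def meta_train_task(model, opt, pool_x, oracle_y, buffer,
                    budget=16, inner_lr=1e-2, strategy="random"):
    """One task: query labels, inner-adapt, then meta-update with replay."""
    idx = query_indices(model, pool_x, budget, strategy)
    x, y = pool_x[idx], oracle_y[idx]  # annotation simulated by an oracle

    # Inner loop: fast adaptation on the newly annotated task data.
    fast = deepcopy(model)
    grads = torch.autograd.grad(F.cross_entropy(fast(x), y),
                                fast.parameters())
    with torch.no_grad():
        for p, g in zip(fast.parameters(), grads):
            p -= inner_lr * g

    # Outer loop: a meta-objective over current plus replayed examples
    # counteracts catastrophic forgetting and inter-task confusion.
    if buffer.x:
        rx, ry = buffer.sample()
        x_out, y_out = torch.cat([x, rx]), torch.cat([y, ry])
    else:
        x_out, y_out = x, y
    meta_grads = torch.autograd.grad(F.cross_entropy(fast(x_out), y_out),
                                     fast.parameters())

    opt.zero_grad()
    for p, g in zip(model.parameters(), meta_grads):
        p.grad = g  # first-order meta-gradient (FOMAML-style)
    opt.step()

    buffer.add(x, y)
```

The sketch assumes an optimizer built over model.parameters() (e.g. torch.optim.Adam). Using a first-order meta-gradient avoids second-order derivatives, a common simplification; the abstract does not specify the exact form of the meta-objective.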