Dynamic Task Subspace Ensemble for Class-Incremental Learning

Times Cited: 0
Authors
Zhang, Weile [1]
He, Yuanjian [1]
Cong, Yulai [1]
Affiliations
[1] Sun Yat-sen University, Shenzhen 518107, People's Republic of China
Source
Artificial Intelligence, CICAI 2023, Part II | 2024, Vol. 14474
Keywords
class-incremental learning; inter-task confusion; dynamic task subspace ensemble; memory-efficient
DOI
10.1007/978-981-99-9119-8_29
CLC Number (Chinese Library Classification)
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Deep-learning models deployed in real-world applications with shifting data distributions are expected to continually learn new concepts without forgetting old ones; in practice, however, they often suffer from catastrophic forgetting. Recently, methods based on task subspace modeling have been developed to address this issue by gradually adding new subspaces to learn new concepts. In this paper, we reveal that such task-subspace-modeling methods may suffer from inter-task confusion, which degrades performance in challenging class-incremental learning (CIL) settings. To address both forgetting and inter-task confusion, we propose a two-stage framework called Dynamic tAsk Subspace Ensemble (DASE): the first stage dynamically expands the extractor network in a memory-efficient manner, while the second stage dynamically learns and aggregates diverse features. To further enhance the discriminative capacity of the aggregated features for both historical and new classes, we also introduce new feature-enhancement techniques. Experimental results demonstrate that our method achieves state-of-the-art CIL performance on natural-image datasets (CIFAR-100 and ImageNet) and Synthetic Aperture Radar image datasets (MSTAR and OpenSARShip).
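
The record carries no code, so the following is a minimal, hypothetical PyTorch-style sketch of the two-stage idea the abstract describes: stage one grows the extractor by one small task-specific sub-extractor per task (earlier ones frozen, which is what makes expansion memory-efficient), and stage two classifies over the aggregated features from all sub-extractors. All names here (SubExtractor, TaskSubspaceEnsemble, add_task) are illustrative assumptions rather than the authors' implementation, and the paper's feature-enhancement techniques are omitted.

import torch
import torch.nn as nn

class SubExtractor(nn.Module):
    """One lightweight task-specific feature extractor (a 'task subspace')."""
    def __init__(self, in_dim: int, feat_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, feat_dim),
            nn.ReLU(),
            nn.Linear(feat_dim, feat_dim),
        )

    def forward(self, x):
        return self.net(x)

class TaskSubspaceEnsemble(nn.Module):
    """Hypothetical two-stage ensemble: expand per task, then aggregate."""
    def __init__(self, in_dim: int, feat_dim: int):
        super().__init__()
        self.in_dim, self.feat_dim = in_dim, feat_dim
        self.extractors = nn.ModuleList()  # grows by one sub-extractor per task
        self.classifier = None             # rebuilt as classes accumulate

    def add_task(self, num_new_classes: int):
        # Stage 1: freeze previously learned subspaces, then append a new,
        # small sub-extractor (cheaper than duplicating a full backbone
        # for every task).
        for p in self.extractors.parameters():
            p.requires_grad_(False)
        self.extractors.append(SubExtractor(self.in_dim, self.feat_dim))
        old = 0 if self.classifier is None else self.classifier.out_features
        # Stage 2 head: a unified classifier over the concatenated features.
        # (A real CIL method would transfer the old class weights instead of
        # re-initializing; re-initialization keeps this sketch short.)
        self.classifier = nn.Linear(
            self.feat_dim * len(self.extractors), old + num_new_classes
        )

    def forward(self, x):
        # Stage 2: aggregate diverse features from every task subspace.
        feats = torch.cat([e(x) for e in self.extractors], dim=1)
        return self.classifier(feats)

model = TaskSubspaceEnsemble(in_dim=32, feat_dim=16)
model.add_task(num_new_classes=5)        # task 1 adds 5 classes
print(model(torch.randn(4, 32)).shape)   # torch.Size([4, 5])
model.add_task(num_new_classes=5)        # task 2 adds 5 more classes
print(model(torch.randn(4, 32)).shape)   # torch.Size([4, 10])

Because every input passes through all sub-extractors at inference, a classifier over the concatenated features can compare old and new classes directly, which is one plausible way to mitigate the inter-task confusion the abstract highlights.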
Pages
322-334 (13 pages)