Preserving Locality in Vision Transformers for Class Incremental Learning

Cited by: 3
Authors
Zheng, Bowen [1 ]
Zhou, Wei [1 ]
Ye, Han-Jia [1 ]
Zhan, De-Chuan [1 ]
Affiliations
[1] Nanjing Univ, Natl Key Lab Novel Software Technol, Nanjing, Peoples R China
Source
2023 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO, ICME | 2023
Funding
National Key R&D Program of China;
Keywords
Class Incremental Learning; Vision Transformer;
DOI
10.1109/ICME55011.2023.00202
CLC Number
TP18 [Theory of Artificial Intelligence];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Learning new classes without forgetting is crucial for real-world applications of a classification model. Vision Transformers (ViTs) have recently achieved remarkable performance in Class Incremental Learning (CIL). Previous works mainly focus on block design and model expansion for ViTs. In this paper, however, we find that when a ViT is trained incrementally, its attention layers gradually lose their concentration on local features. We call this phenomenon Locality Degradation in ViTs for CIL. Since low-level local information is crucial to the transferability of a representation, it is beneficial to preserve locality in the attention layers. In this paper, we encourage the model to preserve more local information as training proceeds and devise a Locality-Preserved Attention (LPA) layer that emphasizes the importance of local features. Specifically, we incorporate local information directly into the vanilla attention and control the initial gradients of the vanilla attention by weighting it with a small initial value. Extensive experiments show that the representations produced with LPA capture more low-level, general information that transfers more easily to follow-up tasks. The improved model achieves consistently better performance on CIFAR100 and ImageNet100. The source code is available at https://github.com/bwnzheng/LPA_ICME2023.
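A minimal sketch of what an LPA-style layer could look like in PyTorch is given below. The abstract does not specify how the local information is computed or how the two branches are combined, so this sketch assumes a depthwise 3x3 convolution over the patch grid as the local branch, a learnable scalar (initialized to a small value, here 0.1) weighting the vanilla attention branch so that its initial gradients are small, and a square patch grid with no class token. The class name LocalityPreservedAttention and all hyperparameters are illustrative and are not the authors' implementation.

# Hypothetical sketch of a Locality-Preserved Attention (LPA) layer.
# Assumptions beyond the abstract: local branch = depthwise conv over the
# token grid; vanilla attention branch scaled by a small learnable weight.
import torch
import torch.nn as nn


class LocalityPreservedAttention(nn.Module):
    def __init__(self, dim, num_heads=8, grid_size=14, init_scale=0.1):
        super().__init__()
        self.grid_size = grid_size
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Depthwise conv captures low-level local structure (assumed design).
        self.local = nn.Conv2d(dim, dim, kernel_size=3, padding=1, groups=dim)
        # Small initial weight on the vanilla attention branch, so early
        # gradients are dominated by the local branch (assumed realization).
        self.alpha = nn.Parameter(torch.tensor(init_scale))

    def forward(self, x):
        # x: (batch, num_patches, dim); assumes a square patch grid, no CLS token.
        b, n, d = x.shape
        h = w = self.grid_size
        attn_out, _ = self.attn(x, x, x)  # global (vanilla) self-attention
        grid = x.transpose(1, 2).reshape(b, d, h, w)
        local_out = self.local(grid).flatten(2).transpose(1, 2)  # local features
        return self.alpha * attn_out + local_out  # weighted combination


# Usage example: tokens from a 14x14 patch grid with embedding dim 384.
tokens = torch.randn(2, 196, 384)
lpa = LocalityPreservedAttention(dim=384, num_heads=6, grid_size=14)
print(lpa(tokens).shape)  # torch.Size([2, 196, 384])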
Pages: 1157-1162
Number of pages: 6