ReViT: Enhancing vision transformers feature diversity with attention residual connections

Cited by: 9
Authors
Diko, Anxhelo [1 ]
Avola, Danilo [1 ]
Cascio, Marco [1 ,2 ]
Cinque, Luigi [1 ]
Affiliations
[1] Sapienza Univ Rome, Dept Comp Sci, Via Salaria 113, I-00198 Rome, Italy
[2] Univ Rome UnitelmaSapienza, Dept Law & Econ, Piazza Sassari 4, I-00161 Rome, Italy
Keywords
Vision transformer; Feature collapse; Self-attention mechanism; Residual attention learning; Visual recognition;
DOI
10.1016/j.patcog.2024.110853
CLC number
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104; 0812; 0835; 1405;
Abstract
The self-attention mechanism of the Vision Transformer (ViT) suffers from feature collapse in deeper layers, causing low-level visual features to vanish. Yet such features help to accurately represent and identify elements within an image, increasing the accuracy and robustness of vision-based recognition systems. Following this rationale, we propose a novel residual attention learning method for improving ViT-based architectures, increasing their visual feature diversity and model robustness. In this way, the proposed network can capture and preserve significant low-level features, providing more detail about the elements of the scene being analyzed. The effectiveness and robustness of the presented method are evaluated on five image classification benchmarks, namely ImageNet1k, CIFAR10, CIFAR100, Oxford Flowers-102, and Oxford-IIIT Pet, achieving improved performance. Additionally, experiments on the COCO2017 dataset show that, when integrated into spatial-aware transformer models, the devised approach discovers and incorporates semantic and spatial relationships for object detection and instance segmentation.
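The core idea described in the abstract, mixing each layer's attention map with the previous layer's so low-level attention patterns survive into deeper layers, can be sketched as follows. This is a minimal single-head NumPy illustration under assumed simplifications (no layer norm, no MLP block, a single mixing coefficient `alpha`); the paper's exact formulation may differ.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_scores(x, wq, wk):
    # Standard scaled dot-product attention map (tokens x tokens).
    q, k = x @ wq, x @ wk
    return softmax(q @ k.T / np.sqrt(k.shape[-1]))

def residual_attention_forward(x, layers, alpha=0.5):
    """Forward pass where each layer's attention map is blended with the
    previous layer's (attention residual connection), so early, low-level
    attention patterns are preserved in deeper layers."""
    prev_attn = None
    for wq, wk, wv in layers:
        attn = attention_scores(x, wq, wk)
        if prev_attn is not None:
            # Residual connection on the attention maps themselves.
            attn = alpha * attn + (1 - alpha) * prev_attn
        prev_attn = attn
        # Usual value aggregation plus the token-level skip connection.
        x = x + attn @ (x @ wv)
    return x
```

With `alpha = 1.0` this reduces to a plain ViT-style stack; smaller values keep more of the earlier layers' attention structure, which is the diversity-preserving effect the abstract argues for.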
Pages: 13