Multi-Manifold Attention for Vision Transformers

Cited by: 3
Authors
Konstantinidis, Dimitrios [1 ]
Papastratis, Ilias [1 ]
Dimitropoulos, Kosmas [1 ]
Daras, Petros [1 ]
Affiliations
[1] Ctr Res & Technol Hellas CERTH, Informat Technol Inst, Thessaloniki 57001, Greece
Keywords
Transformers; manifolds; covariance matrices; task analysis; computational modeling; image classification; computer vision; semantic segmentation; attention; vision transformers
DOI
10.1109/ACCESS.2023.3329952
Chinese Library Classification
TP (Automation and Computer Technology)
Discipline Code
0812
Abstract
Vision Transformers are widely used due to their state-of-the-art performance in several computer vision tasks, such as image classification and action recognition. Although their performance has been greatly enhanced through highly descriptive patch embeddings and hierarchical structures, research on exploiting additional data representations to refine the self-attention map of a Transformer remains limited. To address this gap, this work proposes a novel attention mechanism, called multi-manifold multi-head attention, as a substitute for the vanilla self-attention of a Transformer. The proposed mechanism models the input space on three distinct manifolds, namely Euclidean, Symmetric Positive Definite (SPD) and Grassmann, thus leveraging different statistical and geometrical properties of the input to compute a highly descriptive attention map. In this way, the proposed attention mechanism can guide a Vision Transformer to attend more strongly to important appearance, color and texture features of an image, leading to improved classification and segmentation results, as shown by experiments on well-known datasets.
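The abstract describes attention maps computed from three manifold representations of the token embeddings. The paper's exact formulation is not given in this record, so the following is only a minimal NumPy sketch under stated assumptions: the Euclidean branch is standard scaled dot-product attention; the hypothetical SPD branch lifts each token to a regularized second-order matrix and compares tokens with a log-Euclidean distance; the hypothetical Grassmann branch treats each normalized token as a one-dimensional subspace and uses the projection kernel; and the three maps are simply averaged (the fusion rule is an assumption, not the authors' method).

```python
import numpy as np

def softmax(s):
    # Row-wise softmax with max-subtraction for numerical stability.
    e = np.exp(s - s.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def euclidean_attention(q, k):
    # Vanilla scaled dot-product attention scores (Euclidean branch).
    return softmax(q @ k.T / np.sqrt(q.shape[-1]))

def spd_attention(x, eps=1e-3):
    # Hypothetical SPD branch: lift token x_i to C_i = x_i x_i^T + eps*I
    # (an SPD matrix) and compare tokens via the log-Euclidean distance
    # ||logm(C_i) - logm(C_j)||_F, turned into attention with softmax(-d).
    n, d = x.shape
    logs = []
    for xi in x:
        c = np.outer(xi, xi) + eps * np.eye(d)
        w, v = np.linalg.eigh(c)          # C is symmetric positive definite
        logs.append(v @ np.diag(np.log(w)) @ v.T)
    logs = np.stack(logs)
    dist = np.array([[np.linalg.norm(a - b) for b in logs] for a in logs])
    return softmax(-dist)

def grassmann_attention(x):
    # Hypothetical Grassmann branch: each normalized token spans a 1-D
    # subspace; the projection kernel (u_i . u_j)^2 measures similarity.
    u = x / np.linalg.norm(x, axis=-1, keepdims=True)
    return softmax((u @ u.T) ** 2)

def multi_manifold_attention(x, wq, wk):
    # Assumed fusion rule: average the three branch attention maps, then
    # attend over the tokens themselves (values = x in this sketch).
    a = (euclidean_attention(x @ wq, x @ wk)
         + spd_attention(x)
         + grassmann_attention(x)) / 3.0
    return a @ x

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))               # 4 tokens, 8-dim embeddings
wq = rng.normal(size=(8, 8))
wk = rng.normal(size=(8, 8))
out = multi_manifold_attention(x, wq, wk)
print(out.shape)                          # (4, 8)
```

Each branch produces a row-stochastic attention map, so their average is also row-stochastic; the design intent (per the abstract) is that the second-order (SPD) and subspace (Grassmann) statistics capture texture- and structure-like cues that the dot-product branch alone may miss.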
Pages: 123433-123444
Page count: 12