Strip-MLP: Efficient Token Interaction for Vision MLP

Cited by: 7
Authors
Cao, Guiping [1,2]
Luo, Shengda [1]
Huang, Wenjian [1]
Lan, Xiangyuan [2]
Jiang, Dongmei [2]
Wang, Yaowei [2]
Zhang, Jianguo [1,2]
Affiliations
[1] Southern Univ Sci & Technol, Shenzhen, Peoples R China
[2] Peng Cheng Lab, Shenzhen, Peoples R China
Source
2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION, ICCV | 2023
Keywords
NETWORK
DOI
10.1109/ICCV51070.2023.00144
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Token interaction is one of the core operations in MLP-based models for exchanging and aggregating information between different spatial locations. However, the power of token interaction along the spatial dimension is highly dependent on the spatial resolution of the feature maps, which limits the model's expressive ability, especially in deep layers where the features are down-sampled to a small spatial size. To address this issue, we present a novel method called Strip-MLP that enriches the token interaction power in three ways. First, we introduce a new MLP paradigm, the Strip MLP layer, which allows a token to interact with other tokens in a cross-strip manner, enabling the tokens in a row (or column) to contribute to the information aggregation in adjacent but different strips of rows (or columns). Second, a Cascade Group Strip Mixing Module (CGSMM) is proposed to overcome the performance degradation caused by small spatial feature sizes; it allows tokens to interact more effectively in both within-patch and cross-patch manners, independent of the spatial feature size. Finally, based on the Strip MLP layer, we propose a novel Local Strip Mixing Module (LSMM) to boost the token interaction power in local regions. Extensive experiments demonstrate that Strip-MLP significantly improves the performance of MLP-based models on small datasets and obtains comparable or even better results on ImageNet. In particular, Strip-MLP models achieve higher average Top-1 accuracy than existing MLP-based models by +2.44% on Caltech-101 and +2.16% on CIFAR-100. The source codes will be available at https://github.com/MedProcess/Strip MLP.
[Figure: (a) Down-sampled image. (b) Token mixing of MLP layer.]
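For illustration, the following is a minimal PyTorch sketch of the cross-strip token-mixing idea described in the abstract. It is based only on the description above: the class name CrossStripMixing, the strip_size parameter, and the reshape scheme are assumptions made for clarity, not the authors' released Strip MLP layer (see the linked repository for the official code).

import torch
import torch.nn as nn

class CrossStripMixing(nn.Module):
    """Hypothetical sketch of cross-strip token mixing (not the official layer).

    Adjacent rows are grouped into strips of height strip_size, and a shared
    linear layer mixes all tokens inside a strip along the row direction, so a
    token in one row also contributes to the aggregation of the neighbouring
    rows in the same strip.
    """

    def __init__(self, width: int, strip_size: int = 2):
        super().__init__()
        self.strip_size = strip_size
        # Mixes the strip_size * width tokens that belong to one strip.
        self.mix = nn.Linear(strip_size * width, strip_size * width)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W); H is assumed to be divisible by strip_size.
        b, c, h, w = x.shape
        s = self.strip_size
        x = x.reshape(b, c, h // s, s * w)  # group s adjacent rows into one strip
        x = self.mix(x)                     # token interaction across the whole strip
        return x.reshape(b, c, h, w)

# Example: mix an 8x8 feature map in strips of two adjacent rows.
x = torch.randn(2, 64, 8, 8)
y = CrossStripMixing(width=8, strip_size=2)(x)
assert y.shape == x.shape

The same operation applied to the transposed feature map would give the column-wise counterpart; the paper's CGSMM and LSMM modules build on this kind of layer but are not reproduced here.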
Pages: 1494-1504
Page count: 11