Strip-MLP: Efficient Token Interaction for Vision MLP

Cited by: 7
Authors
Cao, Guiping [1 ,2 ]
Luo, Shengda [1 ]
Huang, Wenjian [1 ]
Lan, Xiangyuan [2 ]
Jiang, Dongmei [2 ]
Wang, Yaowei [2 ]
Zhang, Jianguo [1 ,2 ]
Affiliations
[1] Southern Univ Sci & Technol, Shenzhen, Peoples R China
[2] Peng Cheng Lab, Shenzhen, Peoples R China
Source
2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION, ICCV | 2023
Keywords
NETWORK;
DOI
10.1109/ICCV51070.2023.00144
CLC classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Token interaction is one of the core operations in MLP-based models for exchanging and aggregating information between different spatial locations. However, the power of token interaction along the spatial dimension is highly dependent on the spatial resolution of the feature maps, which limits the model's expressive ability, especially in deep layers where the features are down-sampled to a small spatial size. To address this issue, we present a novel method called Strip-MLP that enriches token interaction power in three ways. First, we introduce a new MLP paradigm called the Strip MLP layer that allows tokens to interact with other tokens in a cross-strip manner, enabling the tokens in a row (or column) to contribute to the information aggregation in adjacent but different strips of rows (or columns). Second, a Cascade Group Strip Mixing Module (CGSMM) is proposed to overcome the performance degradation caused by small spatial feature sizes. The module allows tokens to interact more effectively in both within-patch and cross-patch manners, independent of the feature's spatial size. Finally, based on the Strip MLP layer, we propose a novel Local Strip Mixing Module (LSMM) to boost token interaction power in the local region. Extensive experiments demonstrate that Strip-MLP significantly improves the performance of MLP-based models on small datasets and obtains comparable or even better results on ImageNet. In particular, Strip-MLP models achieve higher average Top-1 accuracy than existing MLP-based models by +2.44% on Caltech-101 and +2.16% on CIFAR-100. The source code will be available at https://github.com/MedProcess/Strip MLP.
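The cross-strip idea in the abstract can be illustrated with a minimal sketch. This is a hypothetical NumPy reconstruction under stated assumptions, not the authors' exact layer: each output row aggregates a strip of adjacent rows through a separate (here randomly initialized) channel-mixing weight per row offset, so tokens in one row contribute to neighbouring rows' features. The function name `strip_mlp_rows` and the `strip` parameter are illustrative choices, not from the paper.

```python
import numpy as np

def strip_mlp_rows(x, strip=3, rng=None):
    """Hypothetical sketch of cross-strip row mixing (not the paper's exact layer).

    x: feature map of shape (H, W, D).
    strip: number of adjacent rows each output row aggregates.
    Each row offset in the strip gets its own (D, D) mixing weight,
    so a token in row i contributes to rows i-1, i, i+1 (for strip=3).
    """
    if rng is None:
        rng = np.random.default_rng(0)
    H, W, D = x.shape
    # One channel-mixing matrix per offset within the strip (random stand-in
    # for learned weights), scaled for numerical stability.
    w = rng.standard_normal((strip, D, D)) / np.sqrt(strip * D)
    pad = strip // 2
    # Zero-pad along the row axis so border rows still see a full strip.
    xp = np.pad(x, ((pad, pad), (0, 0), (0, 0)))
    out = np.zeros_like(x)
    for o in range(strip):
        # Shifted view of the rows at this offset, mixed and accumulated.
        out += xp[o:o + H] @ w[o]
    return out
```

A column-wise version follows by transposing the first two axes; the spatial resolution (H, W) is preserved, which is what lets the same mixing apply in deep, heavily down-sampled stages.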
Pages: 1494 - 1504
Page count: 11