MSANet: Multi-scale attention networks for image classification

Cited by: 5
Authors
Cao, Ping [1 ,2 ]
Xie, Fangxin [1 ]
Zhang, Shichao [1 ]
Zhang, Zuping [1 ]
Zhang, Jianfeng [3 ]
Affiliations
[1] Cent South Univ, Sch Comp Sci & Engn, Changsha, Peoples R China
[2] Beijing Jiaotong Univ, Sch Comp & Informat Technol, Beijing, Peoples R China
[3] Natl Univ Def Technol, Coll Comp Sci, Changsha, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Image classification; Convolutional neural network; Multi-scale feature; Channel attention; Spatial attention; TEXTURE; SCALE;
DOI
10.1007/s11042-022-12792-5
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
Classifying images according to the principles of human vision is a central task in computer vision. Multi-scale information and attention mechanisms are commonly used to improve classification performance: multi-scale methods obtain more accurate feature descriptions by fusing information from different levels, while attention-based methods let deep learning models focus on the most valuable information in an image. However, current methods usually treat the extraction of multi-scale feature maps and the computation of attention weights as two separate, sequential steps. Since the human eye applies both mechanisms simultaneously when observing objects, we propose a multi-scale attention (MSA) module. The MSA module extracts attention information at different scales directly from a feature map; that is, the multi-scale and attention operations are completed together in one step. Within the MSA module, we obtain channel and spatial attention at different scales by controlling the size of the convolution kernels used for cross-channel and cross-space information interaction. The module can be easily integrated into different convolutional neural networks to form multi-scale attention network (MSANet) architectures. We demonstrate the performance of MSANet on the CIFAR-10 and CIFAR-100 data sets. In particular, our ResNet-110-based model reaches 94.39% accuracy on CIFAR-10, and compared with the baseline convolutional models, the proposed multi-scale attention module yields roughly a 3% accuracy gain on CIFAR-100. Experimental results show that the proposed multi-scale attention module performs well on image classification.
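The idea of obtaining channel attention at several scales by varying the kernel size of a cross-channel interaction can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: it assumes an ECA-style design in which a 1D filter slides over the globally pooled channel descriptor, with the kernel size setting the scale of cross-channel interaction; the uniform filter weights, function names, and the averaging fusion of scales are all placeholders.

```python
import numpy as np

def channel_attention_1d(descriptor, kernel_size):
    """Slide a 1D filter (uniform weights here, purely for illustration) over
    the channel descriptor, then apply a sigmoid to get per-channel weights.
    A larger kernel_size means each channel interacts with more neighbors,
    i.e. a coarser attention scale."""
    pad = kernel_size // 2
    padded = np.pad(descriptor, pad, mode="edge")
    smoothed = np.convolve(padded, np.ones(kernel_size) / kernel_size, mode="valid")
    return 1.0 / (1.0 + np.exp(-smoothed))  # sigmoid -> values in (0, 1)

def msa_channel_attention(feature_map, kernel_sizes=(3, 5)):
    """Hypothetical multi-scale channel attention: squeeze the spatial
    dimensions, compute attention at several kernel sizes (scales) in one
    pass, fuse by averaging, and reweight the channels.

    feature_map: array of shape (C, H, W).
    """
    descriptor = feature_map.mean(axis=(1, 2))           # global average pool -> (C,)
    weights = np.mean(
        [channel_attention_1d(descriptor, k) for k in kernel_sizes], axis=0
    )                                                    # fuse scales -> (C,)
    return feature_map * weights[:, None, None]          # broadcast over H, W

x = np.random.rand(8, 4, 4).astype(np.float32)
y = msa_channel_attention(x)
```

Spatial attention at multiple scales would follow the same pattern with 2D kernels of different sizes applied to a pooled spatial map; the key point of the abstract is that both scales are computed from the same feature map in a single step rather than sequentially.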
Pages: 34325-34344
Page count: 20