Multi-Depth Learning with Multi-Attention for fine-grained image classification

Cited by: 0
|
Authors
Dai, Zuhua [1 ]
Li, Hongyi [1 ]
Li, Kelong [1 ]
Zhou, Anwei [1 ]
Affiliations
[1] Northwest Normal Univ, Sch Comp Sci & Engn, Lanzhou, Peoples R China
Source
2020 INTERNATIONAL CONFERENCE ON INTELLIGENT COMPUTING AND HUMAN-COMPUTER INTERACTION (ICHCI 2020) | 2020
Keywords
attention proposal; fine-grained image classification; multi-task learning;
DOI
10.1109/ICHCI51889.2020.00052
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
Compared with conventional image classification, fine-grained image classification must cope with small inter-class differences and large intra-class differences. To address this difficulty, attention proposal has been widely used in fine-grained image classification. However, traditional attention proposal must localize regions first and then process them, so the model runs step by step and relies on a single way of focusing attention. This paper proposes a model, MAMDL (Multi-Attention Multi-Depth Learning), which combines multiple attention mechanisms with parallel learning across multiple networks. MAMDL has three advantages. First, it can be trained end-to-end. Second, it effectively combines four attention mechanisms to improve the network's ability to process local features. Finally, for the attention regions found in the backbone network, feature extraction by branch convolutional neural networks of different depths further enhances classification performance. Experimental results show that MAMDL outperforms mainstream fine-grained image classification methods on the fine-grained datasets CUB-200, Stanford Dogs and Stanford Cars.
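The abstract describes the architecture only at a high level. Below is a minimal, hypothetical PyTorch sketch of the general idea: branch classifiers attached to backbone stages of different depths, each refined by attention, with the branch predictions fused. The class names, the choice of ResNet-50 stages, the SE-style channel attention, the CBAM-style spatial attention, and the averaging fusion are illustrative assumptions, not the authors' implementation; the abstract does not specify which four attention mechanisms MAMDL uses.

```python
# Hypothetical sketch of a multi-attention, multi-depth model in the spirit of
# MAMDL (not the authors' code): attention-refined features from backbone
# stages of different depths feed per-branch classifiers whose logits are fused.
import torch
import torch.nn as nn
import torchvision.models as models  # assumes torchvision >= 0.13


class ChannelAttention(nn.Module):
    """SE-style channel attention (assumed variant)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.gate(x)


class SpatialAttention(nn.Module):
    """CBAM-style spatial attention over channel-pooled maps (assumed variant)."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg_map = x.mean(dim=1, keepdim=True)
        max_map, _ = x.max(dim=1, keepdim=True)
        mask = torch.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))
        return x * mask


class MultiDepthMultiAttention(nn.Module):
    """Branch classifiers on ResNet-50 stages of different depths; each branch
    is refined by channel + spatial attention and the logits are averaged."""
    def __init__(self, num_classes=200):
        super().__init__()
        backbone = models.resnet50(weights=None)
        self.stem = nn.Sequential(backbone.conv1, backbone.bn1,
                                  backbone.relu, backbone.maxpool)
        self.stages = nn.ModuleList([backbone.layer1, backbone.layer2,
                                     backbone.layer3, backbone.layer4])
        stage_channels = [256, 512, 1024, 2048]
        self.attn = nn.ModuleList(
            nn.Sequential(ChannelAttention(c), SpatialAttention())
            for c in stage_channels)
        self.heads = nn.ModuleList(
            nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                          nn.Linear(c, num_classes))
            for c in stage_channels)

    def forward(self, x):
        x = self.stem(x)
        logits = []
        for stage, attn, head in zip(self.stages, self.attn, self.heads):
            x = stage(x)                  # feature at this backbone depth
            logits.append(head(attn(x)))  # branch prediction for this depth
        return torch.stack(logits).mean(dim=0)  # fuse branch predictions


if __name__ == "__main__":
    model = MultiDepthMultiAttention(num_classes=200)  # e.g. CUB-200 classes
    scores = model(torch.randn(2, 3, 224, 224))
    print(scores.shape)  # torch.Size([2, 200])
```

Because every branch is differentiable, the whole multi-branch model can be trained end-to-end with a single classification loss per branch, which is the property the abstract contrasts with step-by-step localize-then-classify attention proposal pipelines.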
Pages: 206 - 212
Page count: 7
Related Papers
50 records in total
  • [31] Fu, Yan; Li, Xutao; Ye, Yunming. A multi-task learning model with adversarial data augmentation for classification of fine-grained images. NEUROCOMPUTING, 2020, 377: 122 - 129.
  • [32] Jing, Hu; Meng-Yao, Wang; Fei, Wang; Ru-Min, Zhang; Bing-Quan, Lian. Fine-Grained Image Classification Network Based on Reinforcement and Complementary Learning. IEEE ACCESS, 2024, 12: 28810 - 28817.
  • [33] Wang, Chuanming; Fu, Huiyuan; Ma, Huadong. Learning Mutually Exclusive Part Representations for Fine-Grained Image Classification. IEEE TRANSACTIONS ON MULTIMEDIA, 2024, 26: 3113 - 3124.
  • [34] Dai, Zehui; Dai, Wei; Liu, Zhenhua; Rao, Fengyun; Chen, Huajie; Zhang, Guangpeng; Ding, Yadong; Liu, Jiyang. Multi-Task Multi-Head Attention Memory Network for Fine-Grained Sentiment Analysis. NATURAL LANGUAGE PROCESSING AND CHINESE COMPUTING (NLPCC 2019), PT I, 2019, 11838: 609 - 620.
  • [35] Zhao, Junjie; Peng, Yuxin. Cost-Sensitive Deep Metric Learning for Fine-Grained Image Classification. MULTIMEDIA MODELING, MMM 2018, PT I, 2018, 10704: 130 - 141.
  • [36] Zhao, Peng; Li, Yi; Tang, Baowei; Liu, Huiting; Yao, Sheng. Feature relocation network for fine-grained image classification. NEURAL NETWORKS, 2023, 161: 306 - 317.
  • [37] Xu, Jie; Zhang, Xiaoqian; Zhao, Changming; Geng, Zili; Feng, Yuren; Miao, Ke; Li, Yunji. Improving Fine-Grained Image Classification With Multimodal Information. IEEE TRANSACTIONS ON MULTIMEDIA, 2024, 26: 2082 - 2095.
  • [38] Lu, Zhengqiu; Wang, Haiying. A Fine-Grained Image Classification Method Built on MobileViT. INTERNATIONAL JOURNAL OF PATTERN RECOGNITION AND ARTIFICIAL INTELLIGENCE, 2024, 38 (06).
  • [39] Tan, Xinxing; Dong, Zemin; Zhao, Hualing. Robust fine-grained image classification with noisy labels. VISUAL COMPUTER, 2022, 39 (11): 5637 - 5650.
  • [40] Zeng, Rui; He, Jingsong. Grouping Bilinear Pooling for Fine-Grained Image Classification. APPLIED SCIENCES-BASEL, 2022, 12 (10).