Exploring Category-Shared and Category-Specific Features for Fine-Grained Image Classification

Cited by: 0
Authors
Wang, Haoyu [1 ]
Chang, DongLiang [1 ]
Liu, Weidong [3 ]
Xiao, Bo [1 ]
Ma, Zhanyu [1 ,2 ]
Guo, Jun [1 ]
Chang, Yaning [1 ]
Affiliations
[1] Beijing Univ Posts & Telecommun, Beijing 100876, Peoples R China
[2] Beijing Acad Artificial Intelligence, Beijing 100876, Peoples R China
[3] China Mobile Res Inst, Beijing 100876, Peoples R China
Source
PATTERN RECOGNITION AND COMPUTER VISION, PT I | 2021 / Vol. 13019
Funding
National Natural Science Foundation of China; Beijing Natural Science Foundation; National Key Research and Development Program of China;
Keywords
Fine-grained image classification; Semantic intra-class similarity; Channel-wise attention; Spatial-wise attention;
DOI
10.1007/978-3-030-88004-0_15
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The attention mechanism is one of the most important approaches to fine-grained image classification (FGIC), yet most existing attention-based methods focus only on inter-class variance and barely model intra-class similarity. They perform classification by enhancing inter-class variance, which reduces intra-class dissimilarity only indirectly. In this paper, we exploit intra-class similarity to improve the classification performance of the obtained attention feature maps. To obtain and utilize this intra-class information, we propose a novel attention mechanism, the category-shared and category-specific feature extraction module (CSS-FEM). CSS-FEM first extracts category-shared features based on the intra-class semantic relationship and then focuses on the discriminative parts. CSS-FEM consists of two parts: 1) the category-shared feature extraction module extracts category-shared features with high intra-class semantic similarity, reducing the large intra-class variance; 2) the category-specific feature extraction module applies a spatial attention mechanism to the category-shared features to find discriminative information as category-specific features, reducing the high inter-class similarity. Experimental results on three commonly used FGIC datasets demonstrate the effectiveness and competitiveness of the proposed CSS-FEM compared with state-of-the-art methods. Ablation experiments and visualizations are also provided for further demonstration.
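The record does not include the paper's implementation; as a rough illustration only, the following is a minimal NumPy sketch of a *generic* spatial-attention reweighting of the kind the abstract mentions (channel-wise average and max descriptors fused into a sigmoid mask, in the spirit of CBAM-style modules). The fusion step here is a plain sum standing in for the learned convolution a real module would use; it is not the authors' CSS-FEM.

```python
import numpy as np

def spatial_attention(feat):
    """Reweight a (C, H, W) feature map by a spatial attention mask.

    Each spatial location gets a scalar weight in (0, 1) computed from
    channel-wise average and max descriptors; all channels at that
    location are scaled by the same weight.
    """
    avg = feat.mean(axis=0)              # (H, W) channel-average descriptor
    mx = feat.max(axis=0)                # (H, W) channel-max descriptor
    score = avg + mx                     # stand-in for a learned fusing conv
    mask = 1.0 / (1.0 + np.exp(-score))  # sigmoid -> spatial mask in (0, 1)
    return feat * mask                   # broadcast (H, W) over all C channels

# Toy usage: a 2-channel 3x3 feature map.
feat = np.ones((2, 3, 3))
out = spatial_attention(feat)
```

Because the mask lies strictly in (0, 1), the module can only attenuate activations, emphasizing locations whose descriptors score highly relative to the rest of the map.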
Pages: 179-190
Page count: 12