Balanced complement loss for long-tailed image classification

Cited by: 0
Authors
Hu, Luyu [1,2]
Yang, Zhao [1,2]
Dou, Yamei [1,2]
Li, Jiahao [1,2]
Affiliations
[1] Guangzhou Univ, Sch Elect & Commun Engn, Guangzhou 510006, Peoples R China
[2] Guangzhou Univ, Huangpu Res & Grad Sch, Guangzhou 510555, Peoples R China
Keywords
Long-tailed distribution; Class imbalance; Complement classes learning; Image classification
DOI
10.1007/s11042-023-17583-0
CLC Classification
TP [Automation Technology, Computer Technology];
Subject Classification
0812;
Abstract
Long-tailed image classification is an important research direction in computer vision, since many real-world image datasets exhibit an obvious long-tailed distribution. In a long-tailed dataset, the head classes contain most of the training samples, while the tail classes have only a few. In this scenario, samples of the head classes act as complement samples (all samples except those of the ground-truth class) for the tail classes, and they generate overwhelmingly discouraging gradients on the tail classes. Consequently, samples of the tail classes are easily misclassified as head classes. To address this issue, we present a balanced complement (BACL) loss that introduces an adaptive weight for the complement classes in the softmax cross-entropy loss. The adaptive weight mitigates the overwhelming gradient suppression that complement samples impose on the tail classes, thereby balancing the gradient ratio between the complement classes and the ground-truth class. We further propose a joint training framework that combines our method with the normalized complement entropy (NCE) via a novel double-angle sine decay strategy, which adjusts the relative contributions of the BACL and NCE losses at different training stages. Under this joint training framework, the model first learns useful information from the complement samples and then gradually shifts its attention to the classification task. Experiments on long-tailed versions of the CIFAR-10/100, SVHN, and ImageNet-2012 datasets demonstrate the effectiveness of the proposed methods.
Pages: 52989-53007 (19 pages)
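
For readers who want a concrete picture of the two mechanisms the abstract describes, the sketch below gives one plausible PyTorch interpretation. It is an illustration under stated assumptions, not the authors' published formulation: the frequency-based complement weight w, the particular complement-entropy form of nce_loss, and the sin(2*theta) schedule in joint_loss are all guesses consistent with the abstract (down-weighting the complement-class gradients that suppress tail classes, plus a double-angle sine decay that shifts emphasis from NCE to BACL over training).

import math
import torch
import torch.nn.functional as F

def bacl_loss(logits, targets, class_counts):
    # Assumed form: add log(w_j) to every complement (non-ground-truth) logit,
    # where w_j in (0, 1] shrinks with class frequency. For a tail class j this
    # lowers its softmax probability on head-class samples, and hence the
    # discouraging gradient those samples push onto class j.
    w = class_counts.float() / class_counts.float().max()
    onehot = F.one_hot(targets, logits.size(1)).float()
    adjusted = logits + torch.log(w + 1e-12) * (1.0 - onehot)
    return F.cross_entropy(adjusted, targets)

def nce_loss(logits, targets):
    # Normalized complement entropy in the spirit of complement objective
    # training: renormalize the predicted probabilities over the complement
    # classes and reward a flat (high-entropy) distribution there.
    num_classes = logits.size(1)
    p = F.softmax(logits, dim=1)
    p_gt = p.gather(1, targets.unsqueeze(1))                  # ground-truth prob, (B, 1)
    mask = F.one_hot(targets, num_classes).bool()
    p_comp = p.masked_fill(mask, 0.0) / (1.0 - p_gt + 1e-12)  # renormalized complement probs
    entropy = -(p_comp * torch.log(p_comp + 1e-12)).sum(dim=1)
    return -(entropy / math.log(num_classes - 1)).mean()      # negative normalized entropy

def joint_loss(logits, targets, class_counts, epoch, total_epochs):
    # Assumed "double-angle sine" schedule: theta shrinks from pi/4 to 0 over
    # training, so sin(2*theta) decays smoothly from 1 to 0 and the objective
    # moves from complement learning (NCE) toward classification (BACL).
    theta = (math.pi / 4.0) * (1.0 - epoch / max(total_epochs - 1, 1))
    lam = math.sin(2.0 * theta)
    return bacl_loss(logits, targets, class_counts) + lam * nce_loss(logits, targets)

# Usage (hypothetical names):
#   logits = model(images)                                       # (B, C)
#   loss = joint_loss(logits, labels, counts, epoch, total_epochs)
#   loss.backward()

The schedule above is only one reading of "double-angle sine decay"; any smooth 1-to-0 curve would realize the same idea of letting complement learning dominate early epochs and classification dominate late ones.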