LCSL: Long-Tailed Classification via Self-Labeling

Cited by: 0
Authors
Duc-Quang Vu [1 ]
Phung, Trang T. T. [2 ]
Wang, Jia-Ching [3 ]
Mai, Son T. [4 ]
Affiliations
[1] Thai Nguyen Univ Educ, Dept Comp Sci & Informat Syst, Thai Nguyen 250000, Vietnam
[2] Thai Nguyen Univ, Sch Foreign Language, Dept Basic Sci, Thai Nguyen 250000, Vietnam
[3] Natl Cent Univ, Dept Comp Sci & Informat Engn, Taoyuan 32001, Taiwan
[4] Queens Univ Belfast, Sch Elect Elect Engn & Comp Sci, Belfast BT7 1NN, Antrim, Northern Ireland
Keywords
Image classification; long-tailed problem; self-labeling; imbalance classification
DOI
10.1109/TCSVT.2024.3421942
Chinese Library Classification (CLC)
TM [Electrical Technology]; TN [Electronic & Communication Technology]
Subject Classification Codes
0808; 0809
Abstract
Over the last decades, deep learning (DL) has proven to be a powerful and successful technique in many real-world applications, e.g., video surveillance or object detection. However, when class label distributions are highly skewed, DL classifiers tend to be biased towards majority classes during training. This leads to poor generalization on minority classes and consequently reduces overall accuracy. How to effectively deal with this long-tailed class distribution in DL, i.e., deep long-tailed classification (DLC), remains a challenging problem despite many research efforts. Among various approaches, data augmentation, which aims at generating more samples to reduce label imbalance, is the most common and practical one. However, simply relying on existing class-agnostic augmentation strategies without properly considering label differences can worsen the problem, since more head-class samples may inevitably be augmented than tail-class ones. Moreover, none of the existing works consider the quality and suitability of augmented samples during the training process. Our proposed approach, called Long-tailed Classification via Self-Labeling (LCSL), is specifically designed to address these limitations. LCSL fundamentally differs from existing works in that it iteratively exploits the preceding network during training to re-label augmented samples and uses the output confidence to decide whether new samples belong to minority classes before adding them to the data. Not only does this reduce imbalance ratios among classes, but it also reduces prediction uncertainty for minority classes by adding only more confident samples to the data. This incremental learning and generating scheme thus provides a new, robust approach for decreasing model over-fitting, thereby enhancing overall accuracy, especially for minority classes. Extensive experiments demonstrate that LCSL achieves better performance than state-of-the-art long-tailed learning techniques on various standard benchmark datasets. More specifically, LCSL obtains 85.8%, 54.4%, and 56.2% accuracy on CIFAR10-LT, CIFAR100-LT, and ImageNet-LT (with moderate to extreme imbalance ratios), respectively. The source code is available at https://github.com/vdquang1991/lcsl/.
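To make the self-labeling step concrete, below is a minimal PyTorch sketch of the confidence-gated re-labeling described in the abstract: the preceding network re-labels augmented samples, and only those confidently predicted as minority classes are added to the training data. The function name select_confident_minority and the threshold and minority_classes parameters are illustrative assumptions, not the authors' actual API; see the repository linked above for the real implementation.

    # A minimal sketch of LCSL's confidence-gated self-labeling step, assuming a
    # PyTorch setup. Names such as select_confident_minority, threshold, and
    # minority_classes are illustrative, not the authors' actual API.
    import torch
    import torch.nn.functional as F

    @torch.no_grad()
    def select_confident_minority(model, augmented_batch, minority_classes, threshold=0.9):
        """Re-label augmented samples with the preceding network and keep only
        those confidently predicted as minority (tail) classes."""
        model.eval()
        logits = model(augmented_batch)              # (B, num_classes)
        probs = F.softmax(logits, dim=1)
        conf, pseudo_labels = probs.max(dim=1)       # per-sample confidence and predicted label
        is_minority = torch.isin(pseudo_labels, minority_classes)
        keep = is_minority & (conf >= threshold)     # confidence gate
        return augmented_batch[keep], pseudo_labels[keep]

    # Hypothetical usage: grow the training set only with confident tail-class samples.
    # minority = torch.tensor([7, 8, 9])             # tail-class indices (example)
    # new_x, new_y = select_confident_minority(net, aug_x, minority, threshold=0.95)

The confidence gate is the key design choice: it filters out low-quality augmented samples and reduces the risk of reinforcing wrong pseudo-labels, which matches the abstract's claim of lowering prediction uncertainty for minority classes.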
Pages: 12048-12058
Number of pages: 11