Active diversification of head-class features in bilateral-expert models for enhanced tail-class optimization in long-tailed classification

Cited by: 1
Authors
Chen, Jianting [1]; Ding, Ling [1]; Yang, Yunxiao [1]; Xiang, Yang [1]
Affiliations
[1] Tongji University, College of Electronics and Information Engineering, Shanghai, People's Republic of China
Keywords
Imbalanced classification; Long-tailed distribution; Mutual information; Ensemble learning; SMOTE
DOI
10.1016/j.engappai.2023.106982
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Training deep learning models on long-tailed datasets is challenging because classification performance on tail classes, which have few samples, is usually unsatisfactory. Many long-tailed methods have achieved success, but some improve tail-class performance at the expense of head-class performance owing to limited model capacity. To address this issue, we propose a novel algorithm-level method, inspired by information theory, that balances the information space of each class and boosts tail-class performance while minimizing head-class sacrifice. Our method actively eliminates redundant feature information of head classes during training to save space for tail classes. Specifically, we use a bilateral-expert model and design a duplicate information disentanglement (DID) module that extracts duplicate and redundant information from the bilateral-expert features. This allows us to develop a head diversity loss, which decreases the extracted duplicate and redundant information of head classes, and a tail distillation loss, which increases the label information of tail classes. Jointly, the two losses allow our model to fully exploit the information space and improve tail-class performance without compromising head-class performance. The effectiveness and practicality of our method are verified on five long-tailed datasets covering visual recognition and fault diagnosis tasks. Experimental results demonstrate that our method outperforms current mainstream methods, which we attribute to the proposed DID module and the two long-tailed losses.
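To make the training objective described in the abstract concrete, here is a minimal PyTorch sketch of a bilateral-expert model with a DID-style head and the two auxiliary losses. It is an illustration under stated assumptions, not the paper's implementation: the names (`BilateralExpertModel`, `head_diversity_loss`, `tail_distillation_loss`), the DID module as a linear projection of the concatenated expert features, the additive expert fusion, the 0.1 loss weights, and the head/tail split are all hypothetical.

```python
# Hypothetical sketch only: names, shapes, loss weights, and the DID form are
# assumptions made for illustration; they are not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BilateralExpertModel(nn.Module):
    """Shared backbone feeding two expert branches and a DID head."""
    def __init__(self, in_dim=3 * 32 * 32, feat_dim=128, num_classes=10):
        super().__init__()
        self.backbone = nn.Sequential(nn.Flatten(), nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.expert_head = nn.Linear(feat_dim, feat_dim)  # expert biased toward the natural (head-heavy) distribution
        self.expert_tail = nn.Linear(feat_dim, feat_dim)  # expert trained with a rebalanced objective
        self.did = nn.Linear(2 * feat_dim, feat_dim)      # assumed DID form: project joint features onto their shared part
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, x):
        z = self.backbone(x)
        f_h, f_t = self.expert_head(z), self.expert_tail(z)
        dup = self.did(torch.cat([f_h, f_t], dim=1))      # duplicate/redundant information shared by both experts
        return self.classifier(f_h + f_t), dup            # simple additive fusion of the two experts

def head_diversity_loss(dup, head_mask):
    # Assumed surrogate: shrink the duplicate component on head-class samples,
    # freeing feature space for tail classes.
    return dup[head_mask].pow(2).mean() if head_mask.any() else dup.new_zeros(())

def tail_distillation_loss(logits, teacher_logits, tail_mask, T=2.0):
    # Assumed surrogate: distil soft teacher targets on tail-class samples to
    # inject extra label information.
    if not tail_mask.any():
        return logits.new_zeros(())
    p = F.log_softmax(logits[tail_mask] / T, dim=1)
    q = F.softmax(teacher_logits[tail_mask] / T, dim=1)
    return F.kl_div(p, q, reduction="batchmean") * T * T

# Toy usage: cross-entropy plus the two auxiliary losses.
model = BilateralExpertModel()
x, labels = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
head_mask = labels < 5                                    # hypothetical head/tail split
logits, dup = model(x)
teacher_logits = logits.detach()                          # stand-in teacher; the paper's teacher is not specified here
loss = (F.cross_entropy(logits, labels)
        + 0.1 * head_diversity_loss(dup, head_mask)
        + 0.1 * tail_distillation_loss(logits, teacher_logits, ~head_mask))
loss.backward()
```

In this reading, shrinking the duplicate component on head-class samples is one simple surrogate for "eliminating redundant head-class information", and distilling soft targets on tail-class samples is one surrogate for "increasing tail-class label information"; the paper's actual losses, which the abstract frames in information-theoretic terms, are not reproduced here.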
Pages: 16
References
64 in total (first 10 listed)
[1] Cai, Jiarui; Wang, Yizhou; Hwang, Jenq-Neng. ACE: Ally Complementary Experts for Solving Long-Tailed Recognition in One-Shot. 2021 IEEE/CVF International Conference on Computer Vision (ICCV 2021), 2021: 112-121.
[2] Cao, Kaidi; Wei, Colin; Gaidon, Adrian; Arechiga, Nikos; Ma, Tengyu. Learning Imbalanced Datasets with Label-Distribution-Aware Margin Loss. Advances in Neural Information Processing Systems, 2019, 32.
[3] Chawla, Nitesh V.; Bowyer, Kevin W.; Hall, Lawrence O.; Kegelmeyer, W. Philip. SMOTE: Synthetic minority over-sampling technique. Journal of Artificial Intelligence Research, 2002, 16: 321-357.
[4] Chen, Zhuohang; Chen, Jinglong; Feng, Yong; Liu, Shen; Zhang, Tianci; Zhang, Kaiyu; Xiao, Wenrong. Imbalance fault diagnosis under long-tailed distribution: Challenges, solutions and prospects. Knowledge-Based Systems, 2022, 258.
[5] Chen, Zhuohang; Chen, Jinglong; Xie, Zongliang; Xu, Enyong; Feng, Yong; Liu, Shen. Multi-expert Attention Network with Unsupervised Aggregation for long-tailed fault diagnosis under speed variation. Knowledge-Based Systems, 2022, 252.
[6] Cui, J. arXiv preprint, 2021.
[7] Cui, Jiequan; Liu, Shu; Tian, Zhuotao; Zhong, Zhisheng; Jia, Jiaya. ResLT: Residual Learning for Long-Tailed Recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023, 45(3): 3695-3706.
[8] Cui, Yin; Jia, Menglin; Lin, Tsung-Yi; Song, Yang; Belongie, Serge. Class-Balanced Loss Based on Effective Number of Samples. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2019), 2019: 9260-9269.
[9] Cui, Yin; Song, Yang; Sun, Chen; Howard, Andrew; Belongie, Serge. Large Scale Fine-Grained Categorization and Domain-Specific Transfer Learning. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2018: 4109-4118.
[10] Czarnowski, Ireneusz; Jedrzejowicz, Piotr. Data reduction and stacking for imbalanced data classification. Journal of Intelligent & Fuzzy Systems, 2019, 37(6): 7239-7249.