CIFDM: Continual and Interactive Feature Distillation for Multi-Label Stream Learning

Cited by: 9
Authors
Wang, Yigong [1 ]
Wang, Zhuoyi [1 ]
Lin, Yu [1 ]
Khan, Latifur [1 ]
Li, Dingcheng [2 ]
Affiliations
[1] Univ Texas Dallas, Richardson, TX 75083 USA
[2] Amazon Alexa AI, Seattle, WA USA
Source
SIGIR '21 - PROCEEDINGS OF THE 44TH INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL | 2021
Keywords
Stream mining; Multi-label; Neural network; Incremental learning
DOI
10.1145/3404835.3463096
CLC Classification Code
TP [automation technology, computer technology]
Discipline Code
0812
Abstract
Multi-label learning algorithms have attracted increasing attention in recent years, mainly because real-world data are generally associated with multiple, non-exclusive labels that may correspond to different objects, scenes, actions, and attributes. In this paper, we consider the following challenging multi-label stream scenario: new labels emerge continuously in changing environments and are also assigned to previous data. In this setting, data mining solutions must learn the new concepts while simultaneously avoiding catastrophic forgetting. We propose a novel continual and interactive feature distillation-based learning framework (CIFDM) to effectively classify instances with novel labels. We utilize the knowledge from previous tasks to learn new knowledge for the current task; the system then compresses historical and novel knowledge and preserves it while waiting for new emerging tasks. CIFDM consists of three components: 1) a knowledge bank that stores the existing feature-level compressed knowledge and predicts all labels observed so far; 2) a pioneer module that learns and predicts newly emerged labels on top of the knowledge bank; 3) an interactive knowledge compression function that compresses and transfers the new knowledge into the bank, and then uses the current compressed knowledge to initialize the label embedding of the pioneer for the next task.
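The three components described in the abstract can be sketched, very loosely, as follows. This is an illustrative sketch under our own assumptions, not the authors' implementation: the class and function names (`KnowledgeBank`, `Pioneer`, `compress_into_bank`) are invented here, and the training and feature-distillation losses are omitted; only the data flow between the components is shown.

```python
import numpy as np

class KnowledgeBank:
    """Stores compressed feature-level knowledge; predicts all labels seen so far."""
    def __init__(self, feat_dim, emb_dim, rng):
        self.W = rng.normal(scale=0.1, size=(feat_dim, emb_dim))  # shared feature compressor
        self.label_emb = np.zeros((0, emb_dim))                   # one embedding per seen label

    def features(self, x):
        return np.tanh(x @ self.W)

    def predict(self, x):
        # sigmoid score for every label observed so far (multi-label, non-exclusive)
        return 1.0 / (1.0 + np.exp(-(self.features(x) @ self.label_emb.T)))

class Pioneer:
    """Learns the current task's novel labels on top of the bank's features."""
    def __init__(self, bank, n_new, emb_dim, rng, init_emb=None):
        self.bank = bank
        # label embeddings may be initialized from the bank's compressed knowledge
        self.label_emb = (init_emb if init_emb is not None
                          else rng.normal(scale=0.1, size=(n_new, emb_dim)))

    def predict(self, x):
        return 1.0 / (1.0 + np.exp(-(self.bank.features(x) @ self.label_emb.T)))

def compress_into_bank(bank, pioneer):
    """Fold the pioneer's label embeddings into the bank; return an
    initialization for the next task's pioneer."""
    bank.label_emb = np.vstack([bank.label_emb, pioneer.label_emb])
    return bank.label_emb.mean(axis=0, keepdims=True)

rng = np.random.default_rng(0)
bank = KnowledgeBank(feat_dim=8, emb_dim=4, rng=rng)
x = rng.normal(size=(5, 8))

# Task 1: three novel labels appear.
p1 = Pioneer(bank, n_new=3, emb_dim=4, rng=rng)
init = compress_into_bank(bank, p1)          # bank now covers 3 labels

# Task 2: two more labels; pioneer starts from the compressed knowledge.
p2 = Pioneer(bank, n_new=2, emb_dim=4, rng=rng,
             init_emb=np.repeat(init, 2, axis=0))
compress_into_bank(bank, p2)                 # bank now covers all 5 labels
print(bank.predict(x).shape)                 # (5, 5)
```

The key point the sketch captures is that the bank accumulates label embeddings across tasks so it can always score every label observed so far, while each new pioneer is initialized from the compressed knowledge rather than from scratch.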
Pages: 2121-2125 (5 pages)
Related Papers (entries 31-40 of 50)
  • [31] Collaboration based multi-modal multi-label learning
    Zhang, Yi
    Zhu, Yinlong
    Zhang, Zhecheng
    Wang, Chongjun
    APPLIED INTELLIGENCE, 2022, 52 (12) : 14204 - 14217
  • [32] Instance Annotation for Multi-Instance Multi-Label Learning
    Briggs, Forrest
    Fern, Xiaoli Z.
    Raich, Raviv
    Lou, Qi
    ACM TRANSACTIONS ON KNOWLEDGE DISCOVERY FROM DATA, 2013, 7 (03)
  • [33] Global and local multi-view multi-label learning
    Zhu, Changming
    Miao, Duoqian
    Wang, Zhe
    Zhou, Rigui
    Wei, Lai
    Zhang, Xiafen
    NEUROCOMPUTING, 2020, 371 : 67 - 77
  • [35] Multi-Label Learning with Regularization Enriched Label-Specific Features
    Chen, Ze-Sen
    Zhang, Min-Ling
    ASIAN CONFERENCE ON MACHINE LEARNING, VOL 101, 2019, 101 : 411 - 424
  • [36] A multi-objective algorithm for multi-label filter feature selection problem
    Dong, Hongbin
    Sun, Jing
    Li, Tao
    Ding, Rui
    Sun, Xiaohang
    APPLIED INTELLIGENCE, 2020, 50 (11) : 3748 - 3774
  • [37] Multi-label classification via incremental clustering on an evolving data stream
    Tien Thanh Nguyen
    Manh Truong Dang
    Anh Vu Luong
    Liew, Alan Wee-Chung
    Liang, Tiancai
    McCall, John
    PATTERN RECOGNITION, 2019, 95 : 96 - 113
  • [38] LAMB: A novel algorithm of label collaboration based multi-label learning
    Zhang, Yi
    Zhang, Zhecheng
    Chen, Mingyuan
    Lu, Hengyang
    Zhang, Lei
    Wang, Chongjun
    INTELLIGENT DATA ANALYSIS, 2022, 26 (05) : 1229 - 1245
  • [40] Deep Learning with a Rethinking Structure for Multi-label Classification
    Yang, Yao-Yuan
    Lin, Yi-An
    Chu, Hong-Min
    Lin, Hsuan-Tien
    ASIAN CONFERENCE ON MACHINE LEARNING, VOL 101, 2019, 101 : 125 - 140