Feature Estimations Based Correlation Distillation for Incremental Image Retrieval

Cited by: 18
Authors
Chen, Wei [1 ]
Liu, Yu [2 ]
Pu, Nan [1 ]
Wang, Weiping [3 ]
Liu, Li [3 ,4 ]
Lew, Michael S. [1 ]
Affiliations
[1] Leiden Univ, Leiden Inst Adv Comp Sci, NL-2311 EZ Leiden, Netherlands
[2] Dalian Univ Technol, DUT RU Int Sch Informat Sci & Engn, Dalian 116024, Peoples R China
[3] NUDT, Coll Syst Engn, Changsha 410073, Peoples R China
[4] Univ Oulu, Ctr Machine Vis & Signal Anal, Oulu 90014, Finland
Funding
National Natural Science Foundation of China;
Keywords
Task analysis; Correlation; Data models; Modeling; Training; Context modeling; Image retrieval; Incremental learning; fine-grained image retrieval; correlations distillation; feature estimation; KNOWLEDGE;
DOI
10.1109/TMM.2021.3073279
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Deep learning for fine-grained image retrieval in an incremental context is still under-investigated. In this paper, we explore this task with the goal of giving the model a continuous retrieval ability: it should perform well on newly incoming data while forgetting little of the knowledge learned on preceding old tasks. To this end, we distill knowledge of the semantic correlations among the representations extracted from the new data only, so as to regularize the parameter updates within a teacher-student framework. In particular, when multiple tasks are learned sequentially, aside from the correlations distilled from the penultimate model, we estimate the representations of all prior models, and from them their semantic correlations, using the representations extracted from the new data. These estimated correlations serve as an additional regularizer that further prevents catastrophic forgetting over all previous tasks, without the need to store the stream of models trained on those tasks. Extensive experiments demonstrate that the proposed method performs favorably in retaining performance on the already-trained old tasks while achieving good accuracy on the current task, whether new data are added all at once or sequentially.
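To make the correlation-distillation regularizer described in the abstract more concrete, the following is a minimal PyTorch sketch. It assumes cosine similarity as the pairwise correlation measure on a batch of new-data embeddings and a mean-squared-error penalty between the teacher's and student's correlation matrices; the function names, the frozen-teacher setup, and the weighting term `lambda_distill` are illustrative assumptions, not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def correlation_matrix(features: torch.Tensor) -> torch.Tensor:
    """Pairwise cosine-similarity matrix (B x B) of a batch of embeddings (B x D)."""
    normed = F.normalize(features, dim=1)
    return normed @ normed.t()

def correlation_distillation_loss(student_feats: torch.Tensor,
                                  teacher_feats: torch.Tensor) -> torch.Tensor:
    """Penalize the discrepancy between the semantic-correlation structure of the
    student (model being updated on the new task) and the frozen teacher (model
    trained on the previous task), computed on the new data only."""
    corr_student = correlation_matrix(student_feats)
    with torch.no_grad():
        corr_teacher = correlation_matrix(teacher_feats)
    return F.mse_loss(corr_student, corr_teacher)

# Hypothetical training objective for one incremental step:
# total loss = retrieval loss on the new task
#              + lambda_distill * correlation_distillation_loss(student_feats, teacher_feats)
```

In the sequential multi-task setting, the same penalty would additionally be applied to correlation matrices estimated for earlier (discarded) models, so that only the current teacher needs to be kept in memory.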
Pages: 1844-1856
Number of pages: 13