Learning from History: Task-agnostic Model Contrastive Learning for Image Restoration

Cited by: 0
Authors
Wu, Gang [1 ]
Jiang, Junjun [1 ]
Jiang, Kui [1 ]
Liu, Xianming [1 ]
Affiliations
[1] Harbin Inst Technol, Fac Comp, Harbin 150001, Peoples R China
Source
THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 6 | 2024
Funding
National Natural Science Foundation of China
DOI
None available
CLC Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Contrastive learning has emerged as a prevailing paradigm for high-level vision tasks and, by introducing properly chosen negative samples, has also been exploited for low-level vision tasks to achieve a compact optimization space that accounts for their ill-posed nature. However, existing methods rely on manually predefined, task-oriented negatives, which often exhibit pronounced task-specific biases. To address this challenge, our paper introduces an innovative method termed 'learning from history', which dynamically generates negative samples from the target model itself. Our approach, named Model Contrastive Learning for Image Restoration (MCLIR), rejuvenates lagged (historical) versions of the model as negative models, making it compatible with diverse image restoration tasks. To enable this, we propose the Self-Prior guided Negative loss (SPN). Existing models are significantly enhanced when retrained with the proposed model contrastive paradigm, with notable improvements in image restoration across various tasks and architectures. For example, models retrained with SPN outperform the original FFANet and DehazeFormer by 3.41 and 0.57 dB on the RESIDE indoor dataset for image dehazing. Similarly, they achieve improvements of 0.47 dB on SPA-Data over IDT for image deraining and 0.12 dB on Manga109 over lightweight SwinIR for 4x super-resolution. Code and retrained models are available at https://github.com/Aitical/MCLIR.
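The abstract describes pulling the restored image toward the ground truth while pushing it away from the output of a historical ("lagged") copy of the model itself. The paper does not give the loss here, so the sketch below is a minimal, hypothetical NumPy illustration of that idea: `spn_loss` and `update_negative_weights` are illustrative names, the ratio-style contrastive term and the EMA update are assumptions about one plausible instantiation, not the authors' implementation.

```python
import numpy as np


def spn_loss(restored, target, negative, lam=0.1, eps=1e-8):
    """Hypothetical sketch of a self-prior guided negative (SPN) style loss.

    `restored` is the current model's output, `target` the ground truth, and
    `negative` the output of a frozen historical copy of the same model.
    The loss pulls `restored` toward `target` and, via the ratio term,
    penalizes staying close to the model's own past prediction.
    """
    positive_term = np.abs(restored - target).mean()    # pull toward ground truth
    negative_term = np.abs(restored - negative).mean()  # distance from "history"
    return positive_term + lam * positive_term / (negative_term + eps)


def update_negative_weights(neg_w, cur_w, momentum=0.999):
    """One plausible way to refresh the lagged negative model: an
    exponential moving average of the current weights (an assumption,
    not necessarily the schedule used in the paper)."""
    return momentum * neg_w + (1.0 - momentum) * cur_w
```

As a sanity check on the shape of the loss: an output close to the ground truth and far from the historical prediction scores lower than either a poor output or one that has not moved away from its own past.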
Pages: 5976-5984 (9 pages)
Related Papers (50 total)
  • [1] Task-Agnostic Safety for Reinforcement Learning
    Rahman, Md Asifur
    Alqahtani, Sarra
    PROCEEDINGS OF THE 16TH ACM WORKSHOP ON ARTIFICIAL INTELLIGENCE AND SECURITY, AISEC 2023, 2023, : 139 - 148
  • [2] Task-Agnostic Vision Transformer for Distributed Learning of Image Processing
    Kim, Boah
    Kim, Jeongsol
    Ye, Jong Chul
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2023, 32 : 203 - 218
  • [3] Task-agnostic Exploration in Reinforcement Learning
    Zhang, Xuezhou
    Ma, Yuzhe
    Singla, Adish
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020, 2020, 33
  • [4] TAPE: Task-Agnostic Prior Embedding for Image Restoration
    Liu, Lin
    Xie, Lingxi
    Zhang, Xiaopeng
    Yuan, Shanxin
    Chen, Xiangyu
    Zhou, Wengang
    Li, Houqiang
    Tian, Qi
    COMPUTER VISION - ECCV 2022, PT XVIII, 2022, 13678 : 447 - 464
  • [5] Loss Decoupling for Task-Agnostic Continual Learning
    Liang, Yan-Shuo
    Li, Wu-Jun
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [6] Hierarchically structured task-agnostic continual learning
    Heinke Hihn
    Daniel A. Braun
    Machine Learning, 2023, 112 : 655 - 686
  • [7] Hierarchically structured task-agnostic continual learning
    Hihn, Heinke
    Braun, Daniel A.
    MACHINE LEARNING, 2023, 112 (02) : 655 - 686
  • [8] Towards a Task-Agnostic Model of Difficulty Estimation for Supervised Learning Tasks
    Laverghetta, Antonio, Jr.
    Mirzakhalov, Jamshidbek
    Licato, John
    AACL-IJCNLP 2020: THE 1ST CONFERENCE OF THE ASIA-PACIFIC CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS AND THE 10TH INTERNATIONAL JOINT CONFERENCE ON NATURAL LANGUAGE PROCESSING: PROCEEDINGS OF THE STUDENT RESEARCH WORKSHOP, 2020, : 16 - 23
  • [9] Learning Task-Agnostic Action Spaces for Movement Optimization
    Babadi, Amin
    van de Panne, Michiel
    Liu, C. Karen
    Hamalainen, Perttu
    IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS, 2022, 28 (12) : 4700 - 4712
  • [10] EViLBERT: Learning Task-Agnostic Multimodal Sense Embeddings
    Calabrese, Agostina
    Bevilacqua, Michele
    Navigli, Roberto
    PROCEEDINGS OF THE TWENTY-NINTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2020, : 481 - 487