Learning from History: Task-agnostic Model Contrastive Learning for Image Restoration

Cited by: 0
Authors
Wu, Gang [1 ]
Jiang, Junjun [1 ]
Jiang, Kui [1 ]
Liu, Xianming [1 ]
Affiliations
[1] Harbin Inst Technol, Fac Comp, Harbin 150001, Peoples R China
Source
THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 6 | 2024
Funding
National Natural Science Foundation of China;
Keywords
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Contrastive learning has emerged as a prevailing paradigm for high-level vision tasks and, by introducing proper negative samples, has also been exploited for low-level vision tasks to obtain a compact optimization space that accounts for their ill-posed nature. However, existing methods rely on manually predefined, task-oriented negatives, which often exhibit pronounced task-specific biases. To address this challenge, our paper introduces an innovative method termed 'learning from history', which dynamically generates negative samples from the target model itself. Our approach, named Model Contrastive Learning for Image Restoration (MCLIR), repurposes lagged (historical) versions of the model as negative models, making it compatible with diverse image restoration tasks. To enable this, we propose the Self-Prior guided Negative loss (SPN). Existing models retrained with the proposed model-contrastive paradigm are significantly enhanced, with consistent improvements in image restoration across various tasks and architectures. For example, models retrained with SPN outperform the original FFANet and DehazeFormer by 3.41 dB and 0.57 dB on the RESIDE indoor dataset for image dehazing. They likewise achieve notable gains of 0.47 dB over IDT on SPA-Data for image deraining and 0.12 dB over lightweight SwinIR on Manga109 for 4x super-resolution. Code and retrained models are available at https://github.com/Aitical/MCLIR.
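The abstract does not spell out the exact form of the Self-Prior guided Negative loss, so the following is only a minimal PyTorch-style sketch of the idea it describes: a frozen, lagged copy of the restoration model produces negative samples on the fly, and the loss pulls the current output toward the clean target while pushing it away from that self-generated negative. The class name SelfPriorNegativeLoss, the momentum and weight hyperparameters, and the pixel-space L1 distances are illustrative assumptions rather than the authors' implementation; the linked repository contains the official code.

import copy

import torch
import torch.nn as nn
import torch.nn.functional as F


class SelfPriorNegativeLoss(nn.Module):
    """Sketch of a model-contrastive loss whose negatives come from a lagged model copy."""

    def __init__(self, model: nn.Module, momentum: float = 0.999, weight: float = 0.1):
        super().__init__()
        # Frozen snapshot of the restoration model; it plays the role of the "negative model".
        self.negative_model = copy.deepcopy(model)
        for p in self.negative_model.parameters():
            p.requires_grad_(False)
        self.momentum = momentum  # how slowly the negative model tracks the online model
        self.weight = weight      # strength of the contrastive (negative) term

    @torch.no_grad()
    def update_negative_model(self, model: nn.Module) -> None:
        # EMA-style update keeps the negative model a lagged, "historical" version of the online one.
        for p_neg, p_online in zip(self.negative_model.parameters(), model.parameters()):
            p_neg.mul_(self.momentum).add_(p_online, alpha=1.0 - self.momentum)

    def forward(self, restored: torch.Tensor, target: torch.Tensor,
                degraded: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():
            negative = self.negative_model(degraded)  # self-generated negative sample
        pos = F.l1_loss(restored, target)    # pull toward the clean ground truth
        neg = F.l1_loss(restored, negative)  # push away from the historical prediction
        return pos + self.weight * pos / (neg + 1e-6)

In a training step one would compute loss = criterion(model(degraded), target, degraded), back-propagate, and then call criterion.update_negative_model(model); the actual SPN loss may instead rely on feature-space distances or periodically saved checkpoints rather than an EMA copy.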
Pages: 5976-5984
Number of pages: 9
Related papers
50 records in total
[11]   EViLBERT: Learning Task-Agnostic Multimodal Sense Embeddings [J].
Calabrese, Agostina ;
Bevilacqua, Michele ;
Navigli, Roberto .
PROCEEDINGS OF THE TWENTY-NINTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2020, :481-487
[12]   Task-Agnostic Dynamics Priors for Deep Reinforcement Learning [J].
Du, Yilun ;
Narasimhan, Karthik .
INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 97, 2019, 97
[13]   Task-agnostic feature extractors for incremental learning at the edge [J].
Loomis, Lisa ;
Wise, David ;
Inkawhich, Nathan ;
Thiem, Clare ;
McDonald, Nathan .
ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING FOR MULTI-DOMAIN OPERATIONS APPLICATIONS VI, 2024, 13051
[14]   Interesting Object, Curious Agent: Learning Task-Agnostic Exploration [J].
Parisi, Simone ;
Dean, Victoria ;
Pathak, Deepak ;
Gupta, Abhinav .
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
[15]   Continual deep reinforcement learning with task-agnostic policy distillation [J].
Hafez, Muhammad Burhan ;
Erekmen, Kerim .
SCIENTIFIC REPORTS, 2024, 14 (01)
[16]   Continual Deep Reinforcement Learning with Task-Agnostic Policy Distillation [J].
Hafez, Muhammad Burhan ;
Erekmen, Kerim .
arXiv.
[17]   Task-agnostic representation learning of multimodal twitter data for downstream applications [J].
Rivas, Ryan ;
Paul, Sudipta ;
Hristidis, Vagelis ;
Papalexakis, Evangelos E. ;
Roy-Chowdhury, Amit K. .
JOURNAL OF BIG DATA, 9
[18]   A Task-Agnostic Regularizer for Diverse Subpolicy Discovery in Hierarchical Reinforcement Learning [J].
Huo, Liangyu ;
Wang, Zulin ;
Xu, Mai ;
Song, Yuhang .
IEEE TRANSACTIONS ON SYSTEMS MAN CYBERNETICS-SYSTEMS, 2023, 53 (03) :1932-1944
[19]   TASK-AGNOSTIC CONTINUAL LEARNING USING BASE-CHILD CLASSIFIERS [J].
Singh, Pranshu Ranjan ;
Gopalakrishnan, Saisubramaniam ;
Qiao, ZhongZheng ;
Suganthan, Ponnuthurai N. ;
Ramasamy, Savitha ;
Ambikapathi, ArulMurugan .
2021 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2021, :794-798
[20]   Task-Agnostic Online Reinforcement Learning with an Infinite Mixture of Gaussian Processes [J].
Xu, Mengdi ;
Ding, Wenhao ;
Zhu, Jiacheng ;
Liu, Zuxin ;
Chen, Baiming ;
Zhao, Ding .
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020, 2020, 33