TAPE: Task-Agnostic Prior Embedding for Image Restoration

Cited by: 26
Authors
Liu, Lin [1 ]
Xie, Lingxi [3 ]
Zhang, Xiaopeng [3 ]
Yuan, Shanxin [4 ]
Chen, Xiangyu [5 ,6 ]
Zhou, Wengang [1 ,2 ]
Li, Houqiang [1 ,2 ]
Tian, Qi [3 ]
Affiliations
[1] Univ Sci & Technol China, EEIS Dept, CAS Key Lab Technol GIPAS, Hefei, Peoples R China
[2] Hefei Comprehens Natl Sci Ctr, Inst Artificial Intelligence, Hefei, Peoples R China
[3] Huawei Cloud BU, Shenzhen, Peoples R China
[4] Huawei Noah's Ark Lab, London, England
[5] Univ Macau, Zhuhai, Peoples R China
[6] Chinese Acad Sci, Shenzhen Inst Adv Technol, Shenzhen, Peoples R China
Source
COMPUTER VISION - ECCV 2022, PT XVIII | 2022, Vol. 13678
Funding
National Natural Science Foundation of China;
Keywords
REMOVAL; NETWORK;
DOI
10.1007/978-3-031-19797-0_26
CLC Classification Number
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Learning a generalized prior for natural image restoration is an important yet challenging task. Early methods mostly relied on handcrafted priors such as normalized sparsity, ℓ0 gradients, and dark channel priors. Recently, deep neural networks have been used to learn various image priors, but they are not guaranteed to generalize. In this paper, we propose a novel approach that embeds a task-agnostic prior into a transformer. Our approach, named Task-Agnostic Prior Embedding (TAPE), consists of two stages, namely task-agnostic pre-training and task-specific fine-tuning, where the first stage embeds prior knowledge about natural images into the transformer and the second stage extracts the knowledge to assist downstream image restoration. Experiments on various types of degradation validate the effectiveness of TAPE. The image restoration performance in terms of PSNR is improved by as much as 1.45 dB, even outperforming task-specific algorithms. More importantly, TAPE shows the ability to disentangle generalized image priors from degraded images, which gives it favorable transferability to unknown downstream tasks.
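The two-stage recipe described in the abstract (task-agnostic pre-training that stores natural-image prior knowledge inside a transformer, followed by task-specific fine-tuning that reuses that knowledge for one degradation type) can be illustrated with a minimal PyTorch sketch. This is only an assumed toy realization, not the authors' implementation: the names (PriorRestorer, prior_tokens, stage1_pretrain, stage2_finetune), the use of learnable prior tokens, the L1 loss, and the choice to freeze the prior during fine-tuning are all assumptions made here for illustration.

```python
# Hypothetical sketch of a two-stage "prior embedding" training scheme.
# Not the TAPE architecture; a toy stand-in to show the training protocol.
import torch
import torch.nn as nn

class PriorRestorer(nn.Module):
    """Toy transformer restorer with learnable prior tokens (illustrative only)."""
    def __init__(self, dim=64, n_prior=16, patch=8):
        super().__init__()
        self.patch = patch
        self.embed = nn.Conv2d(3, dim, patch, stride=patch)              # patchify
        self.prior_tokens = nn.Parameter(torch.randn(1, n_prior, dim))   # assumed "prior" slots
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.unembed = nn.ConvTranspose2d(dim, 3, patch, stride=patch)

    def forward(self, x):
        b, _, h, w = x.shape
        tokens = self.embed(x).flatten(2).transpose(1, 2)                # B x N x C
        tokens = torch.cat([self.prior_tokens.expand(b, -1, -1), tokens], dim=1)
        tokens = self.encoder(tokens)[:, self.prior_tokens.shape[1]:]    # drop prior slots
        feat = tokens.transpose(1, 2).reshape(b, -1, h // self.patch, w // self.patch)
        return x + self.unembed(feat)                                    # residual restoration

def stage1_pretrain(model, degradations, clean_batch, steps=100):
    """Task-agnostic stage: train on a mixture of synthetic degradations."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    for step in range(steps):
        degrade = degradations[step % len(degradations)]                 # cycle through tasks
        loss = nn.functional.l1_loss(model(degrade(clean_batch)), clean_batch)
        opt.zero_grad()
        loss.backward()
        opt.step()

def stage2_finetune(model, degrade, clean_batch, steps=100):
    """Task-specific stage: keep the prior tokens fixed, adapt the rest (an assumption)."""
    model.prior_tokens.requires_grad_(False)
    params = [p for p in model.parameters() if p.requires_grad]
    opt = torch.optim.Adam(params, lr=1e-5)
    for _ in range(steps):
        loss = nn.functional.l1_loss(model(degrade(clean_batch)), clean_batch)
        opt.zero_grad()
        loss.backward()
        opt.step()

if __name__ == "__main__":
    clean = torch.rand(2, 3, 64, 64)                                     # toy data
    noisy = lambda x: (x + 0.1 * torch.randn_like(x)).clamp(0, 1)        # toy degradations
    blurry = lambda x: nn.functional.avg_pool2d(x, 3, 1, 1)
    model = PriorRestorer()
    stage1_pretrain(model, [noisy, blurry], clean, steps=5)              # task-agnostic
    stage2_finetune(model, noisy, clean, steps=5)                        # task-specific
```

Freezing the prior tokens in the second stage is one plausible way to mirror "extracting the knowledge to assist downstream image restoration"; how TAPE actually couples the prior to the fine-tuned model is specified in the paper itself, not in this record.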
Pages: 447-464
Number of pages: 18