HIPA: Hierarchical Patch Transformer for Single Image Super Resolution

Cited by: 17
Authors
Cai, Qing [1 ]
Qian, Yiming [2 ]
Li, Jinxing [3 ]
Lyu, Jun [4 ]
Yang, Yee-Hong [5 ]
Wu, Feng [6 ]
Zhang, David [7 ,8 ,9 ]
Affiliations
[1] Ocean Univ China, Fac Informat Sci & Engn, Qingdao 266100, Shandong, Peoples R China
[2] Univ Manitoba, Dept Comp Sci, Winnipeg, MB R3T 2N2, Canada
[3] Harbin Inst Technol, Sch Comp Sci & Technol, Shenzhen 518055, Guangdong, Peoples R China
[4] Hong Kong Polytech Univ, Sch Nursing, Hong Kong, Peoples R China
[5] Univ Alberta, Dept Comp Sci, Edmonton, AB T6G 2E9, Canada
[6] Univ Sci & Technol China, Sch Informat Sci & Technol, Hefei 230026, Anhui, Peoples R China
[7] Chinese Univ Hong Kong, Sch Data Sci, Shenzhen 518172, Peoples R China
[8] Shenzhen Inst Artificial Intelligence & Robot Soc, Shenzhen 518129, Guangdong, Peoples R China
[9] CUHK SZ Linkl Joint Lab Comp Vis & Artificial Inte, Shenzhen 518172, Guangdong, Peoples R China
Funding
US National Science Foundation; Natural Sciences and Engineering Research Council of Canada;
Keywords
Transformers; Feature extraction; Convolution; Image restoration; Superresolution; Visualization; Computer architecture; single image super-resolution; hierarchical patch transformer; attention-based position embedding; SUPERRESOLUTION; NETWORK;
DOI
10.1109/TIP.2023.3279977
CLC Classification
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Transformer-based architectures have started to emerge in single image super resolution (SISR) and have achieved promising performance. However, most existing vision Transformer-based SISR methods still have two shortcomings: (1) they divide images into the same number of patches with a fixed size, which may not be optimal for restoring patches with different levels of texture richness; and (2) their position encodings treat all input tokens equally and hence neglect the dependencies among them. This paper presents HIPA, a novel Transformer architecture that progressively recovers the high-resolution image using a hierarchical patch partition. Specifically, we build a cascaded model that processes an input image in multiple stages, starting with tokens of small patch size and gradually merging them until the full resolution is reached. Such a hierarchical patch mechanism not only explicitly enables feature aggregation at multiple resolutions but also adaptively learns patch-aware features for different image regions, e.g., using smaller patches for areas with fine details and larger patches for textureless regions. Meanwhile, a new attention-based position encoding scheme for the Transformer is proposed that lets the network learn which tokens deserve more attention by assigning different weights to different tokens, which, to the best of our knowledge, is the first such scheme. Furthermore, we also propose a multi-receptive-field attention module to enlarge the convolutional receptive field across different branches. Experimental results on several public datasets demonstrate the superior performance of the proposed HIPA over previous methods, both quantitatively and qualitatively. We will share our code and models when the paper is accepted.
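The hierarchical patch mechanism summarized above — tokenize the image with a small patch size, then merge neighboring tokens into larger ones at later stages — can be sketched in a few lines of NumPy. This is a minimal illustration of the partition-and-merge idea only, not the paper's implementation; all function names and sizes here are assumptions for the example.

```python
import numpy as np

def image_to_patches(img, patch):
    """Split an (H, W) image into non-overlapping patch tokens,
    each flattened to a vector of length patch * patch."""
    H, W = img.shape
    assert H % patch == 0 and W % patch == 0
    return (img.reshape(H // patch, patch, W // patch, patch)
               .transpose(0, 2, 1, 3)
               .reshape(-1, patch * patch))

def merge_tokens(tokens, grid, factor=2):
    """Merge each factor x factor neighborhood of tokens into one
    longer token, shrinking the token grid by `factor` per side --
    analogous to moving from a small-patch stage to a larger-patch
    stage in a hierarchical pipeline."""
    n, d = tokens.shape
    assert n == grid * grid and grid % factor == 0
    t = tokens.reshape(grid, grid, d)
    return (t.reshape(grid // factor, factor, grid // factor, factor, d)
             .transpose(0, 2, 1, 3, 4)
             .reshape(-1, factor * factor * d))

# Toy 8x8 "image": stage 1 uses 2x2 patches (16 tokens of length 4),
# stage 2 merges 2x2 token neighborhoods (4 tokens of length 16).
img = np.arange(64, dtype=np.float32).reshape(8, 8)
small = image_to_patches(img, patch=2)   # shape (16, 4)
merged = merge_tokens(small, grid=4)     # shape (4, 16)
```

In a real model, each stage would also apply Transformer blocks to the tokens before merging; the merge step is what lets later stages attend over progressively larger spatial regions.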
Pages: 3226-3237
Page count: 12
Related Papers
71 records in total
[1]   NTIRE 2017 Challenge on Single Image Super-Resolution: Dataset and Study [J].
Agustsson, Eirikur ;
Timofte, Radu .
2017 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS (CVPRW), 2017, :1122-1131
[2]   Real Image Denoising with Feature Attention [J].
Anwar, Saeed ;
Barnes, Nick .
2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2019), 2019, :3155-3164
[3]   Multi-Image Super-Resolution for Remote Sensing using Deep Recurrent Networks [J].
Arefin, Md Rifat ;
Michalski, Vincent ;
St-Charles, Pierre-Luc ;
Kalaitzis, Alfredo ;
Kim, Sookyung ;
Kahou, Samira E. ;
Bengio, Yoshua .
2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS (CVPRW 2020), 2020, :816-825
[4]  
Ben Niu, 2020, Computer Vision - ECCV 2020. 16th European Conference. Proceedings. Lecture Notes in Computer Science (LNCS 12357), P191, DOI 10.1007/978-3-030-58610-2_12
[5]   Low-Complexity Single-Image Super-Resolution based on Nonnegative Neighbor Embedding [J].
Bevilacqua, Marco ;
Roumy, Aline ;
Guillemot, Christine ;
Morel, Marie-Line Alberi .
PROCEEDINGS OF THE BRITISH MACHINE VISION CONFERENCE 2012, 2012,
[6]   TDPN: Texture and Detail-Preserving Network for Single Image Super-Resolution [J].
Cai, Qing ;
Li, Jinxing ;
Li, Huafeng ;
Yang, Yee-Hong ;
Wu, Feng ;
Zhang, David .
IEEE TRANSACTIONS ON IMAGE PROCESSING, 2022, 31 :2375-2389
[7]  
Cao JZ, 2023, arXiv preprint, DOI 10.48550/arXiv.2106.06847
[8]   Weighted Couple Sparse Representation With Classified Regularization for Impulse Noise Removal [J].
Chen, Chun Lung Philip ;
Liu, Licheng ;
Chen, Long ;
Tang, Yuan Yan ;
Zhou, Yicong .
IEEE TRANSACTIONS ON IMAGE PROCESSING, 2015, 24 (11) :4014-4026
[9]   Pre-Trained Image Processing Transformer [J].
Chen, Hanting ;
Wang, Yunhe ;
Guo, Tianyu ;
Xu, Chang ;
Deng, Yiping ;
Liu, Zhenhua ;
Ma, Siwei ;
Xu, Chunjing ;
Xu, Chao ;
Gao, Wen .
2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021, :12294-12305
[10]  
Chu XX, 2021, ADV NEUR IN