Local Texture Estimator for Implicit Representation Function

Cited by: 110
Authors: Lee, Jaewon [1]; Jin, Kyong Hwan [1]
Affiliation: [1] Daegu Gyeongbuk Institute of Science and Technology (DGIST), Daegu, South Korea
Source: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2022), 2022
Funding: National Research Foundation of Singapore
DOI: 10.1109/CVPR52688.2022.00197
Chinese Library Classification: TP18 [Theory of artificial intelligence]
Discipline Classification Codes: 081104; 0812; 0835; 1405
Abstract:
Recent work on implicit neural functions has shed light on representing images at arbitrary resolution. However, a standalone multi-layer perceptron shows limited performance in learning high-frequency components. In this paper, we propose the Local Texture Estimator (LTE), a dominant-frequency estimator for natural images that enables an implicit function to capture fine details while reconstructing images in a continuous manner. When jointly trained with a deep super-resolution (SR) architecture, LTE characterizes image textures in 2D Fourier space. We show that an LTE-based neural function achieves favorable performance against existing deep SR methods across arbitrary scale factors. Furthermore, we demonstrate that our implementation has the shortest running time among comparable prior works.
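The abstract describes LTE as a dominant-frequency estimator that lets an implicit function reconstruct texture in 2D Fourier space. As a rough illustration of that idea only (not the authors' released code), the sketch below assumes a PyTorch setup in which hypothetical amplitude and frequency convolutions plus a cell-size phase projection produce Fourier features of the offset between a continuous query coordinate and its nearest encoder feature, which a small MLP then decodes to RGB; all module names, argument names, and sizes are assumptions.

```python
import math

import torch
import torch.nn as nn


class LocalTextureEstimatorSketch(nn.Module):
    """Illustrative LTE-style module (an assumption-laden sketch, not the paper's
    implementation): predict per-location amplitudes, 2D frequencies, and a
    cell-size-dependent phase, build Fourier features of the local coordinate
    offset, and decode RGB with a small MLP."""

    def __init__(self, feat_dim: int = 64, n_freq: int = 128, hidden: int = 256):
        super().__init__()
        self.amplitude = nn.Conv2d(feat_dim, n_freq, 3, padding=1)      # A(x)
        self.frequency = nn.Conv2d(feat_dim, 2 * n_freq, 3, padding=1)  # f(x), 2D
        self.phase = nn.Linear(2, n_freq)                               # p(cell)
        self.decoder = nn.Sequential(
            nn.Linear(2 * n_freq, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, feat, rel_coord, cell):
        # feat:      (B, C, H, W)  encoder features, assumed already sampled at
        #                          the query locations for this sketch
        # rel_coord: (B, H, W, 2)  offset of each query point from its nearest feature
        # cell:      (B, H, W, 2)  size of one query pixel in the continuous domain
        amp = self.amplitude(feat).permute(0, 2, 3, 1)                  # (B, H, W, K)
        freq = self.frequency(feat).permute(0, 2, 3, 1)                 # (B, H, W, 2K)
        freq = torch.stack(freq.split(amp.shape[-1], dim=-1), dim=-1)   # (B, H, W, K, 2)
        phase = self.phase(cell)                                        # (B, H, W, K)

        # Dominant-frequency Fourier features: A * [cos, sin](pi * (f . x + p)).
        arg = math.pi * ((freq * rel_coord.unsqueeze(-2)).sum(dim=-1) + phase)
        fourier = torch.cat([amp * torch.cos(arg), amp * torch.sin(arg)], dim=-1)
        return self.decoder(fourier)                                    # (B, H, W, 3) RGB


if __name__ == "__main__":
    # Smoke test on a toy 48x48 query grid.
    lte = LocalTextureEstimatorSketch()
    out = lte(torch.randn(1, 64, 48, 48), torch.rand(1, 48, 48, 2), torch.rand(1, 48, 48, 2))
    print(out.shape)  # torch.Size([1, 48, 48, 3])
```

Because the decoder consumes Fourier features of a continuous coordinate offset rather than a fixed pixel grid, a module of this shape can be queried at any output resolution, which is the arbitrary-scale behavior the abstract claims; the actual LTE architecture and training setup are detailed in the paper itself.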
Pages: 1928-1937 (10 pages)